2025-02-10 08:30:47.378892 | Job console starting...
2025-02-10 08:30:47.394173 | Updating repositories
2025-02-10 08:30:47.466758 | Preparing job workspace
2025-02-10 08:30:49.320008 | Running Ansible setup...
2025-02-10 08:30:54.756760 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-02-10 08:30:55.501066 |
2025-02-10 08:30:55.501233 | PLAY [Base pre]
2025-02-10 08:30:55.533208 |
2025-02-10 08:30:55.533352 | TASK [Setup log path fact]
2025-02-10 08:30:55.566813 | orchestrator | ok
2025-02-10 08:30:55.590127 |
2025-02-10 08:30:55.590259 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-02-10 08:30:55.625865 | orchestrator | skipping: Conditional result was False
2025-02-10 08:30:55.645358 |
2025-02-10 08:30:55.645570 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-02-10 08:30:55.708432 | orchestrator | ok
2025-02-10 08:30:55.719270 |
2025-02-10 08:30:55.719398 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-02-10 08:30:55.774359 | orchestrator | skipping: Conditional result was False
2025-02-10 08:30:55.783839 |
2025-02-10 08:30:55.783959 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-02-10 08:30:55.808187 | orchestrator | skipping: Conditional result was False
2025-02-10 08:30:55.817656 |
2025-02-10 08:30:55.817785 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-02-10 08:30:55.842081 | orchestrator | skipping: Conditional result was False
2025-02-10 08:30:55.853984 |
2025-02-10 08:30:55.854132 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-02-10 08:30:55.879874 | orchestrator | skipping: Conditional result was False
2025-02-10 08:30:55.916144 |
2025-02-10 08:30:55.916308 | TASK [emit-job-header : Print job information]
2025-02-10 08:30:55.987910 | # Job Information
2025-02-10 08:30:55.988213 | Ansible Version: 2.15.3
2025-02-10 08:30:55.988260 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-02-10 08:30:55.988301 | Pipeline: post
2025-02-10 08:30:55.988331 | Executor: 7d211f194f6a
2025-02-10 08:30:55.988357 | Triggered by: https://github.com/osism/testbed/commit/88c9a01550409e69f921ca14c30503ff015e9804
2025-02-10 08:30:55.988383 | Event ID: 4f37aa8a-e789-11ef-84dd-ee02a0248723
2025-02-10 08:30:55.997413 |
2025-02-10 08:30:55.997530 | LOOP [emit-job-header : Print node information]
2025-02-10 08:30:56.160997 | orchestrator | ok:
2025-02-10 08:30:56.161199 | orchestrator | # Node Information
2025-02-10 08:30:56.161232 | orchestrator | Inventory Hostname: orchestrator
2025-02-10 08:30:56.161256 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-02-10 08:30:56.161277 | orchestrator | Username: zuul-testbed03
2025-02-10 08:30:56.161297 | orchestrator | Distro: Debian 12.9
2025-02-10 08:30:56.161317 | orchestrator | Provider: static-testbed
2025-02-10 08:30:56.161335 | orchestrator | Label: testbed-orchestrator
2025-02-10 08:30:56.161354 | orchestrator | Product Name: OpenStack Nova
2025-02-10 08:30:56.161374 | orchestrator | Interface IP: 81.163.193.140
2025-02-10 08:30:56.188974 |
2025-02-10 08:30:56.189110 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-02-10 08:30:56.664090 | orchestrator -> localhost | changed
2025-02-10 08:30:56.673983 |
2025-02-10 08:30:56.674098 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-02-10 08:30:57.723835 | orchestrator -> localhost | changed
2025-02-10 08:30:57.747140 |
2025-02-10 08:30:57.747379 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-02-10 08:30:58.041163 | orchestrator -> localhost | ok
2025-02-10 08:30:58.060113 |
2025-02-10 08:30:58.060276 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-02-10 08:30:58.112529 | orchestrator | ok
2025-02-10 08:30:58.135936 | orchestrator | included: /var/lib/zuul/builds/eea07dfd5b714acba1304c52e3867367/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-02-10 08:30:58.145358 |
2025-02-10 08:30:58.145464 | TASK [add-build-sshkey : Create Temp SSH key]
2025-02-10 08:30:58.854156 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-02-10 08:30:58.854527 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/eea07dfd5b714acba1304c52e3867367/work/eea07dfd5b714acba1304c52e3867367_id_rsa
2025-02-10 08:30:58.854607 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/eea07dfd5b714acba1304c52e3867367/work/eea07dfd5b714acba1304c52e3867367_id_rsa.pub
2025-02-10 08:30:58.854665 | orchestrator -> localhost | The key fingerprint is:
2025-02-10 08:30:58.854741 | orchestrator -> localhost | SHA256:sL1nr5v2/Isly03v+SobkwbrxMeAcHPiTp8/c1uO6As zuul-build-sshkey
2025-02-10 08:30:58.854793 | orchestrator -> localhost | The key's randomart image is:
2025-02-10 08:30:58.854848 | orchestrator -> localhost | +---[RSA 3072]----+
2025-02-10 08:30:58.854896 | orchestrator -> localhost | | |
2025-02-10 08:30:58.854941 | orchestrator -> localhost | | |
2025-02-10 08:30:58.854986 | orchestrator -> localhost | | o + . |
2025-02-10 08:30:58.855029 | orchestrator -> localhost | | B = |
2025-02-10 08:30:58.855072 | orchestrator -> localhost | | . S o |
2025-02-10 08:30:58.855115 | orchestrator -> localhost | | o + * . |
2025-02-10 08:30:58.855160 | orchestrator -> localhost | | o E O o .|
2025-02-10 08:30:58.855205 | orchestrator -> localhost | | =.X+@.=.|
2025-02-10 08:30:58.855249 | orchestrator -> localhost | | .==&OB**|
2025-02-10 08:30:58.855292 | orchestrator -> localhost | +----[SHA256]-----+
2025-02-10 08:30:58.855386 | orchestrator -> localhost | ok: Runtime: 0:00:00.198575
2025-02-10 08:30:58.874106 |
2025-02-10 08:30:58.874270 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-02-10 08:30:58.928318 | orchestrator | ok
2025-02-10 08:30:58.945034 | orchestrator | included: /var/lib/zuul/builds/eea07dfd5b714acba1304c52e3867367/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-02-10 08:30:58.958109 |
2025-02-10 08:30:58.958237 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-02-10 08:30:58.994405 | orchestrator | skipping: Conditional result was False
2025-02-10 08:30:59.004371 |
2025-02-10 08:30:59.004508 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-02-10 08:30:59.660601 | orchestrator | changed
2025-02-10 08:30:59.670871 |
2025-02-10 08:30:59.670996 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-02-10 08:30:59.987509 | orchestrator | ok
2025-02-10 08:30:59.997021 |
2025-02-10 08:30:59.997144 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-02-10 08:31:00.448220 | orchestrator | ok
2025-02-10 08:31:00.456116 |
2025-02-10 08:31:00.456232 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-02-10 08:31:00.865862 | orchestrator | ok
2025-02-10 08:31:00.876780 |
2025-02-10 08:31:00.876912 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-02-10 08:31:00.914094 | orchestrator | skipping: Conditional result was False
2025-02-10 08:31:00.932937 |
2025-02-10 08:31:00.933073 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-02-10 08:31:01.347854 | orchestrator -> localhost | changed
2025-02-10 08:31:01.363774 |
2025-02-10 08:31:01.363905 | TASK [add-build-sshkey : Add back temp key]
2025-02-10 08:31:01.737040 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/eea07dfd5b714acba1304c52e3867367/work/eea07dfd5b714acba1304c52e3867367_id_rsa (zuul-build-sshkey)
2025-02-10 08:31:01.737437 | orchestrator -> localhost | ok: Runtime: 0:00:00.015411
2025-02-10 08:31:01.752109 |
2025-02-10 08:31:01.752256 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-02-10 08:31:02.156963 | orchestrator | ok
2025-02-10 08:31:02.166451 |
2025-02-10 08:31:02.166579 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-02-10 08:31:02.202140 | orchestrator | skipping: Conditional result was False
2025-02-10 08:31:02.227393 |
2025-02-10 08:31:02.227527 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-02-10 08:31:02.669173 | orchestrator | ok
2025-02-10 08:31:02.691780 |
2025-02-10 08:31:02.692369 | TASK [validate-host : Define zuul_info_dir fact]
2025-02-10 08:31:02.735287 | orchestrator | ok
2025-02-10 08:31:02.745752 |
2025-02-10 08:31:02.745875 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-02-10 08:31:03.103787 | orchestrator -> localhost | ok
2025-02-10 08:31:03.124799 |
2025-02-10 08:31:03.124950 | TASK [validate-host : Collect information about the host]
2025-02-10 08:31:04.354967 | orchestrator | ok
2025-02-10 08:31:04.370957 |
2025-02-10 08:31:04.371081 | TASK [validate-host : Sanitize hostname]
2025-02-10 08:31:04.451920 | orchestrator | ok
2025-02-10 08:31:04.461921 |
2025-02-10 08:31:04.462061 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-02-10 08:31:05.031883 | orchestrator -> localhost | changed
2025-02-10 08:31:05.039916 |
2025-02-10 08:31:05.040054 | TASK [validate-host : Collect information about zuul worker]
2025-02-10 08:31:05.563660 | orchestrator | ok
2025-02-10 08:31:05.574902 |
2025-02-10 08:31:05.575058 | TASK [validate-host : Write out all zuul information for each host]
2025-02-10 08:31:06.139660 | orchestrator -> localhost | changed
2025-02-10 08:31:06.156379 |
2025-02-10 08:31:06.156522 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-02-10 08:31:06.432704 | orchestrator | ok
2025-02-10 08:31:06.440553 |
2025-02-10 08:31:06.440675 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-02-10 08:32:07.428861 | orchestrator | changed:
2025-02-10 08:32:07.429060 | orchestrator | .d..t...... src/
2025-02-10 08:32:07.429100 | orchestrator | .d..t...... src/github.com/
2025-02-10 08:32:07.429128 | orchestrator | .d..t...... src/github.com/osism/
2025-02-10 08:32:07.429153 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-02-10 08:32:07.429178 | orchestrator | RedHat.yml
2025-02-10 08:32:07.448744 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-02-10 08:32:07.448765 | orchestrator | RedHat.yml
2025-02-10 08:32:07.448828 | orchestrator | = 2.2.0"...
2025-02-10 08:32:23.026506 | orchestrator | 08:32:23.026 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-02-10 08:32:23.083777 | orchestrator | 08:32:23.083 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-02-10 08:32:24.104751 | orchestrator | 08:32:24.104 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-02-10 08:32:24.863667 | orchestrator | 08:32:24.863 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-02-10 08:32:25.779382 | orchestrator | 08:32:25.778 STDOUT terraform: - Installing hashicorp/null v3.2.3...
2025-02-10 08:32:26.555095 | orchestrator | 08:32:26.554 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80)
2025-02-10 08:32:27.487849 | orchestrator | 08:32:27.487 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-02-10 08:32:28.553547 | orchestrator | 08:32:28.553 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-02-10 08:32:28.553743 | orchestrator | 08:32:28.553 STDOUT terraform: Providers are signed by their developers.
2025-02-10 08:32:28.553779 | orchestrator | 08:32:28.553 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-02-10 08:32:28.553806 | orchestrator | 08:32:28.553 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-02-10 08:32:28.553848 | orchestrator | 08:32:28.553 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-02-10 08:32:28.554189 | orchestrator | 08:32:28.553 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-02-10 08:32:28.554298 | orchestrator | 08:32:28.553 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-02-10 08:32:28.554318 | orchestrator | 08:32:28.553 STDOUT terraform: you run "tofu init" in the future.
2025-02-10 08:32:28.554341 | orchestrator | 08:32:28.553 STDOUT terraform: OpenTofu has been successfully initialized!
2025-02-10 08:32:28.851337 | orchestrator | 08:32:28.554 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-02-10 08:32:28.851423 | orchestrator | 08:32:28.554 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-02-10 08:32:28.851431 | orchestrator | 08:32:28.554 STDOUT terraform: should now work.
2025-02-10 08:32:28.851437 | orchestrator | 08:32:28.554 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-02-10 08:32:28.851444 | orchestrator | 08:32:28.554 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-02-10 08:32:28.851450 | orchestrator | 08:32:28.554 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-02-10 08:32:28.851468 | orchestrator | 08:32:28.850 STDOUT terraform: Created and switched to workspace "ci"!
2025-02-10 08:32:29.126790 | orchestrator | 08:32:28.851 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-02-10 08:32:29.126968 | orchestrator | 08:32:28.851 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-02-10 08:32:29.126996 | orchestrator | 08:32:28.851 STDOUT terraform: for this configuration.
2025-02-10 08:32:29.127046 | orchestrator | 08:32:29.126 STDOUT terraform: ci.auto.tfvars
2025-02-10 08:32:30.268547 | orchestrator | 08:32:30.268 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-02-10 08:32:30.790888 | orchestrator | 08:32:30.790 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-02-10 08:32:31.098924 | orchestrator | 08:32:31.098 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-02-10 08:32:31.099012 | orchestrator | 08:32:31.098 STDOUT terraform: plan.
Resource actions are indicated with the following symbols: 2025-02-10 08:32:31.099021 | orchestrator | 08:32:31.098 STDOUT terraform:  + create 2025-02-10 08:32:31.099030 | orchestrator | 08:32:31.098 STDOUT terraform:  <= read (data resources) 2025-02-10 08:32:31.099128 | orchestrator | 08:32:31.098 STDOUT terraform: OpenTofu will perform the following actions: 2025-02-10 08:32:31.099135 | orchestrator | 08:32:31.098 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-02-10 08:32:31.099141 | orchestrator | 08:32:31.098 STDOUT terraform:  # (config refers to values not yet known) 2025-02-10 08:32:31.099157 | orchestrator | 08:32:31.099 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-02-10 08:32:31.099206 | orchestrator | 08:32:31.099 STDOUT terraform:  + checksum = (known after apply) 2025-02-10 08:32:31.099213 | orchestrator | 08:32:31.099 STDOUT terraform:  + created_at = (known after apply) 2025-02-10 08:32:31.099219 | orchestrator | 08:32:31.099 STDOUT terraform:  + file = (known after apply) 2025-02-10 08:32:31.099226 | orchestrator | 08:32:31.099 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.099283 | orchestrator | 08:32:31.099 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.099291 | orchestrator | 08:32:31.099 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-02-10 08:32:31.099361 | orchestrator | 08:32:31.099 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-02-10 08:32:31.099368 | orchestrator | 08:32:31.099 STDOUT terraform:  + most_recent = true 2025-02-10 08:32:31.099374 | orchestrator | 08:32:31.099 STDOUT terraform:  + name = (known after apply) 2025-02-10 08:32:31.099439 | orchestrator | 08:32:31.099 STDOUT terraform:  + protected = (known after apply) 2025-02-10 08:32:31.099445 | orchestrator | 08:32:31.099 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.099456 | orchestrator | 08:32:31.099 STDOUT terraform:  + schema = (known after apply) 2025-02-10 08:32:31.099517 | orchestrator | 08:32:31.099 STDOUT terraform:  + size_bytes = (known after apply) 2025-02-10 08:32:31.099523 | orchestrator | 08:32:31.099 STDOUT terraform:  + tags = (known after apply) 2025-02-10 08:32:31.099536 | orchestrator | 08:32:31.099 STDOUT terraform:  + updated_at = (known after apply) 2025-02-10 08:32:31.099541 | orchestrator | 08:32:31.099 STDOUT terraform:  } 2025-02-10 08:32:31.099567 | orchestrator | 08:32:31.099 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-02-10 08:32:31.099614 | orchestrator | 08:32:31.099 STDOUT terraform:  # (config refers to values not yet known) 2025-02-10 08:32:31.099623 | orchestrator | 08:32:31.099 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-02-10 08:32:31.099630 | orchestrator | 08:32:31.099 STDOUT terraform:  + checksum = (known after apply) 2025-02-10 08:32:31.099695 | orchestrator | 08:32:31.099 STDOUT terraform:  + created_at = (known after apply) 2025-02-10 08:32:31.099703 | orchestrator | 08:32:31.099 STDOUT terraform:  + file = (known after apply) 2025-02-10 08:32:31.099770 | orchestrator | 08:32:31.099 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.099850 | orchestrator | 08:32:31.099 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.099858 | orchestrator | 08:32:31.099 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-02-10 08:32:31.099924 | orchestrator | 08:32:31.099 STDOUT terraform:  + 
min_ram_mb = (known after apply) 2025-02-10 08:32:31.099930 | orchestrator | 08:32:31.099 STDOUT terraform:  + most_recent = true 2025-02-10 08:32:31.099936 | orchestrator | 08:32:31.099 STDOUT terraform:  + name = (known after apply) 2025-02-10 08:32:31.099942 | orchestrator | 08:32:31.099 STDOUT terraform:  + protected = (known after apply) 2025-02-10 08:32:31.100002 | orchestrator | 08:32:31.099 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.100008 | orchestrator | 08:32:31.099 STDOUT terraform:  + schema = (known after apply) 2025-02-10 08:32:31.100015 | orchestrator | 08:32:31.099 STDOUT terraform:  + size_bytes = (known after apply) 2025-02-10 08:32:31.100080 | orchestrator | 08:32:31.099 STDOUT terraform:  + tags = (known after apply) 2025-02-10 08:32:31.100087 | orchestrator | 08:32:31.099 STDOUT terraform:  + updated_at = (known after apply) 2025-02-10 08:32:31.100093 | orchestrator | 08:32:31.100 STDOUT terraform:  } 2025-02-10 08:32:31.100159 | orchestrator | 08:32:31.100 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-02-10 08:32:31.100165 | orchestrator | 08:32:31.100 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-02-10 08:32:31.100171 | orchestrator | 08:32:31.100 STDOUT terraform:  + content = (known after apply) 2025-02-10 08:32:31.100235 | orchestrator | 08:32:31.100 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-02-10 08:32:31.100242 | orchestrator | 08:32:31.100 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-02-10 08:32:31.100313 | orchestrator | 08:32:31.100 STDOUT terraform:  + content_md5 = (known after apply) 2025-02-10 08:32:31.100323 | orchestrator | 08:32:31.100 STDOUT terraform:  + content_sha1 = (known after apply) 2025-02-10 08:32:31.100390 | orchestrator | 08:32:31.100 STDOUT terraform:  + content_sha256 = (known after apply) 2025-02-10 08:32:31.100396 | orchestrator | 08:32:31.100 STDOUT terraform:  + content_sha512 = (known after apply) 2025-02-10 08:32:31.100402 | orchestrator | 08:32:31.100 STDOUT terraform:  + directory_permission = "0777" 2025-02-10 08:32:31.100468 | orchestrator | 08:32:31.100 STDOUT terraform:  + file_permission = "0644" 2025-02-10 08:32:31.100478 | orchestrator | 08:32:31.100 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-02-10 08:32:31.100485 | orchestrator | 08:32:31.100 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.100548 | orchestrator | 08:32:31.100 STDOUT terraform:  } 2025-02-10 08:32:31.100556 | orchestrator | 08:32:31.100 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-02-10 08:32:31.100626 | orchestrator | 08:32:31.100 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-02-10 08:32:31.100635 | orchestrator | 08:32:31.100 STDOUT terraform:  + content = (known after apply) 2025-02-10 08:32:31.100704 | orchestrator | 08:32:31.100 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-02-10 08:32:31.100712 | orchestrator | 08:32:31.100 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-02-10 08:32:31.100718 | orchestrator | 08:32:31.100 STDOUT terraform:  + content_md5 = (known after apply) 2025-02-10 08:32:31.100724 | orchestrator | 08:32:31.100 STDOUT terraform:  + content_sha1 = (known after apply) 2025-02-10 08:32:31.100783 | orchestrator | 08:32:31.100 STDOUT terraform:  + content_sha256 = (known after apply) 2025-02-10 08:32:31.100858 | orchestrator | 08:32:31.100 STDOUT terraform:  + content_sha512 = (known after apply) 
2025-02-10 08:32:31.100866 | orchestrator | 08:32:31.100 STDOUT terraform:  + directory_permission = "0777" 2025-02-10 08:32:31.100937 | orchestrator | 08:32:31.100 STDOUT terraform:  + file_permission = "0644" 2025-02-10 08:32:31.100943 | orchestrator | 08:32:31.100 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-02-10 08:32:31.100949 | orchestrator | 08:32:31.100 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.101014 | orchestrator | 08:32:31.100 STDOUT terraform:  } 2025-02-10 08:32:31.101020 | orchestrator | 08:32:31.100 STDOUT terraform:  # local_file.inventory will be created 2025-02-10 08:32:31.101025 | orchestrator | 08:32:31.100 STDOUT terraform:  + resource "local_file" "inventory" { 2025-02-10 08:32:31.101032 | orchestrator | 08:32:31.100 STDOUT terraform:  + content = (known after apply) 2025-02-10 08:32:31.101092 | orchestrator | 08:32:31.100 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-02-10 08:32:31.101099 | orchestrator | 08:32:31.101 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-02-10 08:32:31.101171 | orchestrator | 08:32:31.101 STDOUT terraform:  + content_md5 = (known after apply) 2025-02-10 08:32:31.101178 | orchestrator | 08:32:31.101 STDOUT terraform:  + content_sha1 = (known after apply) 2025-02-10 08:32:31.101184 | orchestrator | 08:32:31.101 STDOUT terraform:  + content_sha256 = (known after apply) 2025-02-10 08:32:31.101190 | orchestrator | 08:32:31.101 STDOUT terraform:  + content_sha512 = (known after apply) 2025-02-10 08:32:31.101251 | orchestrator | 08:32:31.101 STDOUT terraform:  + directory_permission = "0777" 2025-02-10 08:32:31.101326 | orchestrator | 08:32:31.101 STDOUT terraform:  + file_permission = "0644" 2025-02-10 08:32:31.101335 | orchestrator | 08:32:31.101 STDOUT terraform:  + filename = "inventory.ci" 2025-02-10 08:32:31.101341 | orchestrator | 08:32:31.101 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.101350 | orchestrator | 08:32:31.101 STDOUT terraform:  } 2025-02-10 08:32:31.101357 | orchestrator | 08:32:31.101 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-02-10 08:32:31.101364 | orchestrator | 08:32:31.101 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-02-10 08:32:31.101410 | orchestrator | 08:32:31.101 STDOUT terraform:  + content = (sensitive value) 2025-02-10 08:32:31.101487 | orchestrator | 08:32:31.101 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-02-10 08:32:31.101495 | orchestrator | 08:32:31.101 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-02-10 08:32:31.101502 | orchestrator | 08:32:31.101 STDOUT terraform:  + content_md5 = (known after apply) 2025-02-10 08:32:31.101566 | orchestrator | 08:32:31.101 STDOUT terraform:  + content_sha1 = (known after apply) 2025-02-10 08:32:31.101643 | orchestrator | 08:32:31.101 STDOUT terraform:  + content_sha256 = (known after apply) 2025-02-10 08:32:31.101650 | orchestrator | 08:32:31.101 STDOUT terraform:  + content_sha512 = (known after apply) 2025-02-10 08:32:31.101723 | orchestrator | 08:32:31.101 STDOUT terraform:  + directory_permission = "0700" 2025-02-10 08:32:31.101803 | orchestrator | 08:32:31.101 STDOUT terraform:  + file_permission = "0600" 2025-02-10 08:32:31.101808 | orchestrator | 08:32:31.101 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-02-10 08:32:31.101815 | orchestrator | 08:32:31.101 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.101878 | orchestrator | 08:32:31.101 STDOUT 
terraform:  } 2025-02-10 08:32:31.101884 | orchestrator | 08:32:31.101 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-02-10 08:32:31.101889 | orchestrator | 08:32:31.101 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-02-10 08:32:31.101895 | orchestrator | 08:32:31.101 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.101952 | orchestrator | 08:32:31.101 STDOUT terraform:  } 2025-02-10 08:32:31.101958 | orchestrator | 08:32:31.101 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-02-10 08:32:31.101964 | orchestrator | 08:32:31.101 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-02-10 08:32:31.102043 | orchestrator | 08:32:31.101 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.102052 | orchestrator | 08:32:31.101 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.102127 | orchestrator | 08:32:31.101 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.102133 | orchestrator | 08:32:31.102 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:31.102139 | orchestrator | 08:32:31.102 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.102204 | orchestrator | 08:32:31.102 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-02-10 08:32:31.102280 | orchestrator | 08:32:31.102 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.102288 | orchestrator | 08:32:31.102 STDOUT terraform:  + size = 80 2025-02-10 08:32:31.102357 | orchestrator | 08:32:31.102 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.102368 | orchestrator | 08:32:31.102 STDOUT terraform:  } 2025-02-10 08:32:31.102376 | orchestrator | 08:32:31.102 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-02-10 08:32:31.102435 | orchestrator | 08:32:31.102 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-02-10 08:32:31.102441 | orchestrator | 08:32:31.102 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.102448 | orchestrator | 08:32:31.102 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.102453 | orchestrator | 08:32:31.102 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.102459 | orchestrator | 08:32:31.102 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:31.102515 | orchestrator | 08:32:31.102 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.102522 | orchestrator | 08:32:31.102 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-02-10 08:32:31.102618 | orchestrator | 08:32:31.102 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.102692 | orchestrator | 08:32:31.102 STDOUT terraform:  + size = 80 2025-02-10 08:32:31.102698 | orchestrator | 08:32:31.102 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.102703 | orchestrator | 08:32:31.102 STDOUT terraform:  } 2025-02-10 08:32:31.102709 | orchestrator | 08:32:31.102 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-02-10 08:32:31.102772 | orchestrator | 08:32:31.102 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-02-10 08:32:31.102848 | orchestrator | 08:32:31.102 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.102855 | orchestrator | 08:32:31.102 STDOUT terraform:  + 
availability_zone = "nova" 2025-02-10 08:32:31.102926 | orchestrator | 08:32:31.102 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.102933 | orchestrator | 08:32:31.102 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:31.103024 | orchestrator | 08:32:31.102 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.103032 | orchestrator | 08:32:31.102 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-02-10 08:32:31.103039 | orchestrator | 08:32:31.102 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.103044 | orchestrator | 08:32:31.103 STDOUT terraform:  + size = 80 2025-02-10 08:32:31.103052 | orchestrator | 08:32:31.103 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.103162 | orchestrator | 08:32:31.103 STDOUT terraform:  } 2025-02-10 08:32:31.103170 | orchestrator | 08:32:31.103 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-02-10 08:32:31.103381 | orchestrator | 08:32:31.103 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-02-10 08:32:31.103388 | orchestrator | 08:32:31.103 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.103395 | orchestrator | 08:32:31.103 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.103763 | orchestrator | 08:32:31.103 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.103776 | orchestrator | 08:32:31.103 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:31.103781 | orchestrator | 08:32:31.103 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.103786 | orchestrator | 08:32:31.103 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-02-10 08:32:31.103791 | orchestrator | 08:32:31.103 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.103796 | orchestrator | 08:32:31.103 STDOUT terraform:  + size = 80 2025-02-10 08:32:31.103801 | orchestrator | 08:32:31.103 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.103806 | orchestrator | 08:32:31.103 STDOUT terraform:  } 2025-02-10 08:32:31.103813 | orchestrator | 08:32:31.103 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-02-10 08:32:31.104147 | orchestrator | 08:32:31.103 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-02-10 08:32:31.104158 | orchestrator | 08:32:31.103 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.104163 | orchestrator | 08:32:31.103 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.104169 | orchestrator | 08:32:31.103 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.104174 | orchestrator | 08:32:31.103 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:31.104179 | orchestrator | 08:32:31.103 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.104184 | orchestrator | 08:32:31.103 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-02-10 08:32:31.104189 | orchestrator | 08:32:31.103 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.104194 | orchestrator | 08:32:31.103 STDOUT terraform:  + size = 80 2025-02-10 08:32:31.104199 | orchestrator | 08:32:31.103 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.104204 | orchestrator | 08:32:31.103 STDOUT terraform:  } 2025-02-10 08:32:31.104209 | orchestrator | 08:32:31.103 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-02-10 08:32:31.104220 | orchestrator | 08:32:31.103 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-02-10 08:32:31.104474 | orchestrator | 08:32:31.103 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.104489 | orchestrator | 08:32:31.103 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.104495 | orchestrator | 08:32:31.103 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.104500 | orchestrator | 08:32:31.103 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:31.104505 | orchestrator | 08:32:31.103 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.104510 | orchestrator | 08:32:31.103 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-02-10 08:32:31.104515 | orchestrator | 08:32:31.104 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.104520 | orchestrator | 08:32:31.104 STDOUT terraform:  + size = 80 2025-02-10 08:32:31.104531 | orchestrator | 08:32:31.104 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.104537 | orchestrator | 08:32:31.104 STDOUT terraform:  } 2025-02-10 08:32:31.104542 | orchestrator | 08:32:31.104 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-02-10 08:32:31.104547 | orchestrator | 08:32:31.104 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-02-10 08:32:31.104552 | orchestrator | 08:32:31.104 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.104558 | orchestrator | 08:32:31.104 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.104565 | orchestrator | 08:32:31.104 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.104571 | orchestrator | 08:32:31.104 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:31.104576 | orchestrator | 08:32:31.104 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.104608 | orchestrator | 08:32:31.104 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-02-10 08:32:31.104614 | orchestrator | 08:32:31.104 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.104619 | orchestrator | 08:32:31.104 STDOUT terraform:  + size = 80 2025-02-10 08:32:31.104625 | orchestrator | 08:32:31.104 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.104630 | orchestrator | 08:32:31.104 STDOUT terraform:  } 2025-02-10 08:32:31.104635 | orchestrator | 08:32:31.104 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-02-10 08:32:31.104640 | orchestrator | 08:32:31.104 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:31.104645 | orchestrator | 08:32:31.104 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.104650 | orchestrator | 08:32:31.104 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.104659 | orchestrator | 08:32:31.104 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.105811 | orchestrator | 08:32:31.104 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.105824 | orchestrator | 08:32:31.104 STDOUT terraform:  + name = "testbed-volume-0-node-0" 2025-02-10 08:32:31.105834 | orchestrator | 08:32:31.104 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.105840 | orchestrator | 08:32:31.104 STDOUT terraform:  + size 
= 20 2025-02-10 08:32:31.105846 | orchestrator | 08:32:31.104 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.105851 | orchestrator | 08:32:31.104 STDOUT terraform:  } 2025-02-10 08:32:31.105858 | orchestrator | 08:32:31.104 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-02-10 08:32:31.105866 | orchestrator | 08:32:31.104 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:31.105874 | orchestrator | 08:32:31.104 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.105881 | orchestrator | 08:32:31.104 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.105889 | orchestrator | 08:32:31.104 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.105905 | orchestrator | 08:32:31.104 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.105911 | orchestrator | 08:32:31.104 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-02-10 08:32:31.105915 | orchestrator | 08:32:31.104 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.105921 | orchestrator | 08:32:31.104 STDOUT terraform:  + size = 20 2025-02-10 08:32:31.105926 | orchestrator | 08:32:31.104 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.105931 | orchestrator | 08:32:31.105 STDOUT terraform:  } 2025-02-10 08:32:31.105937 | orchestrator | 08:32:31.105 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-02-10 08:32:31.105942 | orchestrator | 08:32:31.105 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:31.105950 | orchestrator | 08:32:31.105 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.105961 | orchestrator | 08:32:31.105 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.105969 | orchestrator | 08:32:31.105 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.105977 | orchestrator | 08:32:31.105 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.105984 | orchestrator | 08:32:31.105 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-02-10 08:32:31.105989 | orchestrator | 08:32:31.105 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.105994 | orchestrator | 08:32:31.105 STDOUT terraform:  + size = 20 2025-02-10 08:32:31.106001 | orchestrator | 08:32:31.105 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.106006 | orchestrator | 08:32:31.105 STDOUT terraform:  } 2025-02-10 08:32:31.106011 | orchestrator | 08:32:31.105 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-02-10 08:32:31.106040 | orchestrator | 08:32:31.105 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:31.106046 | orchestrator | 08:32:31.105 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.106051 | orchestrator | 08:32:31.105 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.106056 | orchestrator | 08:32:31.105 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.106061 | orchestrator | 08:32:31.105 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.106066 | orchestrator | 08:32:31.105 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-02-10 08:32:31.106071 | orchestrator | 08:32:31.105 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.106076 | orchestrator | 08:32:31.105 STDOUT 
terraform:  + size = 20 2025-02-10 08:32:31.106082 | orchestrator | 08:32:31.105 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.106087 | orchestrator | 08:32:31.105 STDOUT terraform:  } 2025-02-10 08:32:31.106096 | orchestrator | 08:32:31.105 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-02-10 08:32:31.106108 | orchestrator | 08:32:31.105 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:31.108482 | orchestrator | 08:32:31.105 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.108515 | orchestrator | 08:32:31.105 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.108521 | orchestrator | 08:32:31.105 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.108527 | orchestrator | 08:32:31.105 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.108532 | orchestrator | 08:32:31.105 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-02-10 08:32:31.108537 | orchestrator | 08:32:31.105 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.108543 | orchestrator | 08:32:31.105 STDOUT terraform:  + size = 20 2025-02-10 08:32:31.108548 | orchestrator | 08:32:31.105 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.108554 | orchestrator | 08:32:31.105 STDOUT terraform:  } 2025-02-10 08:32:31.108559 | orchestrator | 08:32:31.105 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-02-10 08:32:31.108564 | orchestrator | 08:32:31.105 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:31.108569 | orchestrator | 08:32:31.106 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.108575 | orchestrator | 08:32:31.106 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.108617 | orchestrator | 08:32:31.106 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.108628 | orchestrator | 08:32:31.106 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.108634 | orchestrator | 08:32:31.106 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-02-10 08:32:31.108639 | orchestrator | 08:32:31.106 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.108644 | orchestrator | 08:32:31.106 STDOUT terraform:  + size = 20 2025-02-10 08:32:31.108650 | orchestrator | 08:32:31.106 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.108655 | orchestrator | 08:32:31.106 STDOUT terraform:  } 2025-02-10 08:32:31.108660 | orchestrator | 08:32:31.106 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-02-10 08:32:31.108665 | orchestrator | 08:32:31.106 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:31.108670 | orchestrator | 08:32:31.106 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.108675 | orchestrator | 08:32:31.106 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.108680 | orchestrator | 08:32:31.106 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.108685 | orchestrator | 08:32:31.106 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.108690 | orchestrator | 08:32:31.106 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-02-10 08:32:31.108695 | orchestrator | 08:32:31.106 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.108700 | orchestrator | 
08:32:31.106 STDOUT terraform:  + size = 20 2025-02-10 08:32:31.108705 | orchestrator | 08:32:31.106 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.108720 | orchestrator | 08:32:31.106 STDOUT terraform:  } 2025-02-10 08:32:31.108725 | orchestrator | 08:32:31.106 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-02-10 08:32:31.108730 | orchestrator | 08:32:31.106 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:31.108734 | orchestrator | 08:32:31.106 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.108739 | orchestrator | 08:32:31.106 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.108744 | orchestrator | 08:32:31.106 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.108749 | orchestrator | 08:32:31.106 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.108754 | orchestrator | 08:32:31.106 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-02-10 08:32:31.108759 | orchestrator | 08:32:31.106 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.108765 | orchestrator | 08:32:31.106 STDOUT terraform:  + size = 20 2025-02-10 08:32:31.108770 | orchestrator | 08:32:31.106 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.108775 | orchestrator | 08:32:31.106 STDOUT terraform:  } 2025-02-10 08:32:31.108781 | orchestrator | 08:32:31.106 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-02-10 08:32:31.108786 | orchestrator | 08:32:31.106 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:31.108791 | orchestrator | 08:32:31.106 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.108799 | orchestrator | 08:32:31.107 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.108805 | orchestrator | 08:32:31.107 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.108810 | orchestrator | 08:32:31.107 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.108815 | orchestrator | 08:32:31.107 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-02-10 08:32:31.108820 | orchestrator | 08:32:31.107 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.108829 | orchestrator | 08:32:31.107 STDOUT terraform:  + size = 20 2025-02-10 08:32:31.108834 | orchestrator | 08:32:31.107 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.108839 | orchestrator | 08:32:31.107 STDOUT terraform:  } 2025-02-10 08:32:31.108844 | orchestrator | 08:32:31.107 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-02-10 08:32:31.108849 | orchestrator | 08:32:31.107 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:31.108854 | orchestrator | 08:32:31.107 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.108859 | orchestrator | 08:32:31.107 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.108863 | orchestrator | 08:32:31.107 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.108868 | orchestrator | 08:32:31.107 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.108873 | orchestrator | 08:32:31.107 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-02-10 08:32:31.108886 | orchestrator | 08:32:31.107 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.108891 | 
orchestrator | 08:32:31.107 STDOUT terraform:  + size = 20 2025-02-10 08:32:31.108895 | orchestrator | 08:32:31.107 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.108900 | orchestrator | 08:32:31.107 STDOUT terraform:  } 2025-02-10 08:32:31.108905 | orchestrator | 08:32:31.107 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-02-10 08:32:31.108910 | orchestrator | 08:32:31.107 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:31.108915 | orchestrator | 08:32:31.107 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.108920 | orchestrator | 08:32:31.107 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.108925 | orchestrator | 08:32:31.107 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.108931 | orchestrator | 08:32:31.107 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.108936 | orchestrator | 08:32:31.107 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-02-10 08:32:31.108941 | orchestrator | 08:32:31.107 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.108946 | orchestrator | 08:32:31.107 STDOUT terraform:  + size = 20 2025-02-10 08:32:31.108951 | orchestrator | 08:32:31.107 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.108955 | orchestrator | 08:32:31.107 STDOUT terraform:  } 2025-02-10 08:32:31.108960 | orchestrator | 08:32:31.107 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-02-10 08:32:31.108965 | orchestrator | 08:32:31.107 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:31.108970 | orchestrator | 08:32:31.107 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.108975 | orchestrator | 08:32:31.107 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.108980 | orchestrator | 08:32:31.107 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.108985 | orchestrator | 08:32:31.107 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.108990 | orchestrator | 08:32:31.107 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-02-10 08:32:31.108994 | orchestrator | 08:32:31.108 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.108999 | orchestrator | 08:32:31.108 STDOUT terraform:  + size = 20 2025-02-10 08:32:31.109004 | orchestrator | 08:32:31.108 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.109009 | orchestrator | 08:32:31.108 STDOUT terraform:  } 2025-02-10 08:32:31.109014 | orchestrator | 08:32:31.108 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-02-10 08:32:31.109019 | orchestrator | 08:32:31.108 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:31.109026 | orchestrator | 08:32:31.108 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.110104 | orchestrator | 08:32:31.108 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.110117 | orchestrator | 08:32:31.108 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.110123 | orchestrator | 08:32:31.108 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.110129 | orchestrator | 08:32:31.108 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-02-10 08:32:31.110134 | orchestrator | 08:32:31.108 STDOUT terraform:  + region = (known after apply) 
2025-02-10 08:32:31.110139 | orchestrator | 08:32:31.108 STDOUT terraform:  + size = 20 2025-02-10 08:32:31.110144 | orchestrator | 08:32:31.108 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.110149 | orchestrator | 08:32:31.108 STDOUT terraform:  } 2025-02-10 08:32:31.110154 | orchestrator | 08:32:31.108 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-02-10 08:32:31.110159 | orchestrator | 08:32:31.108 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:31.110164 | orchestrator | 08:32:31.108 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.110169 | orchestrator | 08:32:31.108 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.110174 | orchestrator | 08:32:31.108 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.110179 | orchestrator | 08:32:31.108 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.110184 | orchestrator | 08:32:31.108 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-02-10 08:32:31.110189 | orchestrator | 08:32:31.108 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.110194 | orchestrator | 08:32:31.108 STDOUT terraform:  + size = 20 2025-02-10 08:32:31.110199 | orchestrator | 08:32:31.108 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.110204 | orchestrator | 08:32:31.108 STDOUT terraform:  } 2025-02-10 08:32:31.110209 | orchestrator | 08:32:31.108 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-02-10 08:32:31.110214 | orchestrator | 08:32:31.108 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:31.110219 | orchestrator | 08:32:31.108 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.110224 | orchestrator | 08:32:31.108 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.110229 | orchestrator | 08:32:31.108 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.110234 | orchestrator | 08:32:31.108 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.110239 | orchestrator | 08:32:31.108 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-02-10 08:32:31.110243 | orchestrator | 08:32:31.108 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.110248 | orchestrator | 08:32:31.108 STDOUT terraform:  + size = 20 2025-02-10 08:32:31.110253 | orchestrator | 08:32:31.108 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.110258 | orchestrator | 08:32:31.108 STDOUT terraform:  } 2025-02-10 08:32:31.110263 | orchestrator | 08:32:31.108 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-02-10 08:32:31.110273 | orchestrator | 08:32:31.108 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:31.110278 | orchestrator | 08:32:31.108 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.110288 | orchestrator | 08:32:31.109 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.110293 | orchestrator | 08:32:31.109 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.110300 | orchestrator | 08:32:31.109 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.110305 | orchestrator | 08:32:31.109 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-02-10 08:32:31.110310 | orchestrator | 08:32:31.109 STDOUT terraform:  + region 
= (known after apply) 2025-02-10 08:32:31.110315 | orchestrator | 08:32:31.109 STDOUT terraform:  + size = 20 2025-02-10 08:32:31.110320 | orchestrator | 08:32:31.109 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.110325 | orchestrator | 08:32:31.109 STDOUT terraform:  } 2025-02-10 08:32:31.110330 | orchestrator | 08:32:31.109 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-02-10 08:32:31.110335 | orchestrator | 08:32:31.109 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:31.110339 | orchestrator | 08:32:31.109 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.110344 | orchestrator | 08:32:31.109 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.110350 | orchestrator | 08:32:31.109 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.110354 | orchestrator | 08:32:31.109 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.110359 | orchestrator | 08:32:31.109 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-02-10 08:32:31.110364 | orchestrator | 08:32:31.109 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.110369 | orchestrator | 08:32:31.109 STDOUT terraform:  + size = 20 2025-02-10 08:32:31.110374 | orchestrator | 08:32:31.109 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.110379 | orchestrator | 08:32:31.109 STDOUT terraform:  } 2025-02-10 08:32:31.110383 | orchestrator | 08:32:31.109 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-02-10 08:32:31.110388 | orchestrator | 08:32:31.109 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-10 08:32:31.110393 | orchestrator | 08:32:31.109 STDOUT terraform:  + attachment = (known after apply) 2025-02-10 08:32:31.110398 | orchestrator | 08:32:31.109 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.110403 | orchestrator | 08:32:31.109 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.110408 | orchestrator | 08:32:31.109 STDOUT terraform:  + metadata = (known after apply) 2025-02-10 08:32:31.110413 | orchestrator | 08:32:31.109 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-02-10 08:32:31.110417 | orchestrator | 08:32:31.109 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.110426 | orchestrator | 08:32:31.109 STDOUT terraform:  + size = 20 2025-02-10 08:32:31.110431 | orchestrator | 08:32:31.109 STDOUT terraform:  + volume_type = "ssd" 2025-02-10 08:32:31.110436 | orchestrator | 08:32:31.109 STDOUT terraform:  } 2025-02-10 08:32:31.110442 | orchestrator | 08:32:31.109 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-02-10 08:32:31.110447 | orchestrator | 08:32:31.109 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-02-10 08:32:31.110452 | orchestrator | 08:32:31.109 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-10 08:32:31.110456 | orchestrator | 08:32:31.109 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-10 08:32:31.110461 | orchestrator | 08:32:31.109 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-10 08:32:31.110466 | orchestrator | 08:32:31.109 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.110471 | orchestrator | 08:32:31.109 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.110479 | orchestrator | 
08:32:31.109 STDOUT terraform:  + config_drive = true 2025-02-10 08:32:31.112326 | orchestrator | 08:32:31.109 STDOUT terraform:  + created = (known after apply) 2025-02-10 08:32:31.112426 | orchestrator | 08:32:31.111 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-10 08:32:31.112436 | orchestrator | 08:32:31.111 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-02-10 08:32:31.112444 | orchestrator | 08:32:31.111 STDOUT terraform:  + force_delete = false 2025-02-10 08:32:31.112452 | orchestrator | 08:32:31.111 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.112458 | orchestrator | 08:32:31.111 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:31.112465 | orchestrator | 08:32:31.111 STDOUT terraform:  + image_name = (known after apply) 2025-02-10 08:32:31.112471 | orchestrator | 08:32:31.111 STDOUT terraform:  + key_pair = "testbed" 2025-02-10 08:32:31.112478 | orchestrator | 08:32:31.111 STDOUT terraform:  + name = "testbed-manager" 2025-02-10 08:32:31.112485 | orchestrator | 08:32:31.111 STDOUT terraform:  + power_state = "active" 2025-02-10 08:32:31.112491 | orchestrator | 08:32:31.111 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.112496 | orchestrator | 08:32:31.111 STDOUT terraform:  + security_groups = (known after apply) 2025-02-10 08:32:31.112502 | orchestrator | 08:32:31.111 STDOUT terraform:  + stop_before_destroy = false 2025-02-10 08:32:31.112508 | orchestrator | 08:32:31.111 STDOUT terraform:  + updated = (known after apply) 2025-02-10 08:32:31.112523 | orchestrator | 08:32:31.111 STDOUT terraform:  + user_data = (known after apply) 2025-02-10 08:32:31.112529 | orchestrator | 08:32:31.111 STDOUT terraform:  + block_device { 2025-02-10 08:32:31.112535 | orchestrator | 08:32:31.111 STDOUT terraform:  + boot_index = 0 2025-02-10 08:32:31.112541 | orchestrator | 08:32:31.111 STDOUT terraform:  + delete_on_termination = false 2025-02-10 08:32:31.112548 | orchestrator | 08:32:31.111 STDOUT terraform:  + destination_type = "volume" 2025-02-10 08:32:31.112572 | orchestrator | 08:32:31.111 STDOUT terraform:  + multiattach = false 2025-02-10 08:32:31.112602 | orchestrator | 08:32:31.111 STDOUT terraform:  + source_type = "volume" 2025-02-10 08:32:31.112613 | orchestrator | 08:32:31.111 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:31.112629 | orchestrator | 08:32:31.111 STDOUT terraform:  } 2025-02-10 08:32:31.112639 | orchestrator | 08:32:31.111 STDOUT terraform:  + network { 2025-02-10 08:32:31.112648 | orchestrator | 08:32:31.111 STDOUT terraform:  + access_network = false 2025-02-10 08:32:31.112655 | orchestrator | 08:32:31.111 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-10 08:32:31.112661 | orchestrator | 08:32:31.111 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-10 08:32:31.112667 | orchestrator | 08:32:31.111 STDOUT terraform:  + mac = (known after apply) 2025-02-10 08:32:31.112673 | orchestrator | 08:32:31.111 STDOUT terraform:  + name = (known after apply) 2025-02-10 08:32:31.112679 | orchestrator | 08:32:31.111 STDOUT terraform:  + port = (known after apply) 2025-02-10 08:32:31.112687 | orchestrator | 08:32:31.111 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:31.112693 | orchestrator | 08:32:31.111 STDOUT terraform:  } 2025-02-10 08:32:31.112700 | orchestrator | 08:32:31.112 STDOUT terraform:  } 2025-02-10 08:32:31.112706 | orchestrator | 08:32:31.112 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be 
created 2025-02-10 08:32:31.112712 | orchestrator | 08:32:31.112 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-02-10 08:32:31.112718 | orchestrator | 08:32:31.112 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-10 08:32:31.112724 | orchestrator | 08:32:31.112 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-10 08:32:31.112736 | orchestrator | 08:32:31.112 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-10 08:32:31.112800 | orchestrator | 08:32:31.112 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.112809 | orchestrator | 08:32:31.112 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.112815 | orchestrator | 08:32:31.112 STDOUT terraform:  + config_drive = true 2025-02-10 08:32:31.112821 | orchestrator | 08:32:31.112 STDOUT terraform:  + created = (known after apply) 2025-02-10 08:32:31.112828 | orchestrator | 08:32:31.112 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-10 08:32:31.112833 | orchestrator | 08:32:31.112 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-02-10 08:32:31.112839 | orchestrator | 08:32:31.112 STDOUT terraform:  + force_delete = false 2025-02-10 08:32:31.112845 | orchestrator | 08:32:31.112 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.112851 | orchestrator | 08:32:31.112 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:31.112857 | orchestrator | 08:32:31.112 STDOUT terraform:  + image_name = (known after apply) 2025-02-10 08:32:31.112863 | orchestrator | 08:32:31.112 STDOUT terraform:  + key_pair = "testbed" 2025-02-10 08:32:31.112875 | orchestrator | 08:32:31.112 STDOUT terraform:  + name = "testbed-node-0" 2025-02-10 08:32:31.112882 | orchestrator | 08:32:31.112 STDOUT terraform:  + power_state = "active" 2025-02-10 08:32:31.112888 | orchestrator | 08:32:31.112 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.112897 | orchestrator | 08:32:31.112 STDOUT terraform:  + security_groups = (known after apply) 2025-02-10 08:32:31.112904 | orchestrator | 08:32:31.112 STDOUT terraform:  + stop_before_destroy = false 2025-02-10 08:32:31.112910 | orchestrator | 08:32:31.112 STDOUT terraform:  + updated = (known after apply) 2025-02-10 08:32:31.112916 | orchestrator | 08:32:31.112 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-02-10 08:32:31.112923 | orchestrator | 08:32:31.112 STDOUT terraform:  + block_device { 2025-02-10 08:32:31.112934 | orchestrator | 08:32:31.112 STDOUT terraform:  + boot_index = 0 2025-02-10 08:32:31.112962 | orchestrator | 08:32:31.112 STDOUT terraform:  + delete_on_termination = false 2025-02-10 08:32:31.112972 | orchestrator | 08:32:31.112 STDOUT terraform:  + destination_type = "volume" 2025-02-10 08:32:31.112982 | orchestrator | 08:32:31.112 STDOUT terraform:  + multiattach = false 2025-02-10 08:32:31.113025 | orchestrator | 08:32:31.112 STDOUT terraform:  + source_type = "volume" 2025-02-10 08:32:31.113062 | orchestrator | 08:32:31.113 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:31.113071 | orchestrator | 08:32:31.113 STDOUT terraform:  } 2025-02-10 08:32:31.113080 | orchestrator | 08:32:31.113 STDOUT terraform:  + network { 2025-02-10 08:32:31.113090 | orchestrator | 08:32:31.113 STDOUT terraform:  + access_network = false 2025-02-10 08:32:31.113133 | orchestrator | 08:32:31.113 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-10 08:32:31.113162 | orchestrator | 08:32:31.113 
STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-10 08:32:31.113202 | orchestrator | 08:32:31.113 STDOUT terraform:  + mac = (known after apply) 2025-02-10 08:32:31.113220 | orchestrator | 08:32:31.113 STDOUT terraform:  + name = (known after apply) 2025-02-10 08:32:31.113262 | orchestrator | 08:32:31.113 STDOUT terraform:  + port = (known after apply) 2025-02-10 08:32:31.113279 | orchestrator | 08:32:31.113 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:31.113291 | orchestrator | 08:32:31.113 STDOUT terraform:  } 2025-02-10 08:32:31.113306 | orchestrator | 08:32:31.113 STDOUT terraform:  } 2025-02-10 08:32:31.113377 | orchestrator | 08:32:31.113 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-02-10 08:32:31.113618 | orchestrator | 08:32:31.113 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-02-10 08:32:31.113719 | orchestrator | 08:32:31.113 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-10 08:32:31.113739 | orchestrator | 08:32:31.113 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-10 08:32:31.113753 | orchestrator | 08:32:31.113 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-10 08:32:31.113793 | orchestrator | 08:32:31.113 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.113812 | orchestrator | 08:32:31.113 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.113825 | orchestrator | 08:32:31.113 STDOUT terraform:  + config_drive = true 2025-02-10 08:32:31.113838 | orchestrator | 08:32:31.113 STDOUT terraform:  + created = (known after apply) 2025-02-10 08:32:31.113852 | orchestrator | 08:32:31.113 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-10 08:32:31.113876 | orchestrator | 08:32:31.113 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-02-10 08:32:31.113890 | orchestrator | 08:32:31.113 STDOUT terraform:  + force_delete = false 2025-02-10 08:32:31.113903 | orchestrator | 08:32:31.113 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.113920 | orchestrator | 08:32:31.113 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:31.113966 | orchestrator | 08:32:31.113 STDOUT terraform:  + image_name = (known after apply) 2025-02-10 08:32:31.113984 | orchestrator | 08:32:31.113 STDOUT terraform:  + key_pair = "testbed" 2025-02-10 08:32:31.113999 | orchestrator | 08:32:31.113 STDOUT terraform:  + name = "testbed-node-1" 2025-02-10 08:32:31.114012 | orchestrator | 08:32:31.113 STDOUT terraform:  + power_state = "active" 2025-02-10 08:32:31.114066 | orchestrator | 08:32:31.113 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.114121 | orchestrator | 08:32:31.113 STDOUT terraform:  + security_groups = (known after apply) 2025-02-10 08:32:31.114138 | orchestrator | 08:32:31.113 STDOUT terraform:  + stop_before_destroy = false 2025-02-10 08:32:31.114151 | orchestrator | 08:32:31.114 STDOUT terraform:  + updated = (known after apply) 2025-02-10 08:32:31.114171 | orchestrator | 08:32:31.114 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-02-10 08:32:31.115916 | orchestrator | 08:32:31.114 STDOUT terraform:  + block_device { 2025-02-10 08:32:31.115971 | orchestrator | 08:32:31.114 STDOUT terraform:  + boot_index = 0 2025-02-10 08:32:31.115979 | orchestrator | 08:32:31.114 STDOUT terraform:  + delete_on_termination = false 2025-02-10 08:32:31.115994 | orchestrator | 08:32:31.114 STDOUT terraform:  + 
destination_type = "volume" 2025-02-10 08:32:31.115999 | orchestrator | 08:32:31.114 STDOUT terraform:  + multiattach = false 2025-02-10 08:32:31.116005 | orchestrator | 08:32:31.114 STDOUT terraform:  + source_type = "volume" 2025-02-10 08:32:31.116010 | orchestrator | 08:32:31.114 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:31.116016 | orchestrator | 08:32:31.114 STDOUT terraform:  } 2025-02-10 08:32:31.116021 | orchestrator | 08:32:31.114 STDOUT terraform:  + network { 2025-02-10 08:32:31.116026 | orchestrator | 08:32:31.114 STDOUT terraform:  + access_network = false 2025-02-10 08:32:31.116032 | orchestrator | 08:32:31.114 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-10 08:32:31.116037 | orchestrator | 08:32:31.114 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-10 08:32:31.116050 | orchestrator | 08:32:31.114 STDOUT terraform:  + mac = (known after apply) 2025-02-10 08:32:31.116055 | orchestrator | 08:32:31.114 STDOUT terraform:  + name = (known after apply) 2025-02-10 08:32:31.116060 | orchestrator | 08:32:31.114 STDOUT terraform:  + port = (known after apply) 2025-02-10 08:32:31.116065 | orchestrator | 08:32:31.114 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:31.116070 | orchestrator | 08:32:31.114 STDOUT terraform:  } 2025-02-10 08:32:31.116075 | orchestrator | 08:32:31.114 STDOUT terraform:  } 2025-02-10 08:32:31.116081 | orchestrator | 08:32:31.114 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-02-10 08:32:31.116085 | orchestrator | 08:32:31.114 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-02-10 08:32:31.116096 | orchestrator | 08:32:31.114 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-10 08:32:31.116101 | orchestrator | 08:32:31.114 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-10 08:32:31.116111 | orchestrator | 08:32:31.114 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-10 08:32:31.116117 | orchestrator | 08:32:31.114 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.116122 | orchestrator | 08:32:31.114 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.116127 | orchestrator | 08:32:31.114 STDOUT terraform:  + config_drive = true 2025-02-10 08:32:31.116132 | orchestrator | 08:32:31.114 STDOUT terraform:  + created = (known after apply) 2025-02-10 08:32:31.116137 | orchestrator | 08:32:31.114 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-10 08:32:31.116142 | orchestrator | 08:32:31.114 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-02-10 08:32:31.116147 | orchestrator | 08:32:31.114 STDOUT terraform:  + force_delete = false 2025-02-10 08:32:31.116152 | orchestrator | 08:32:31.114 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.116156 | orchestrator | 08:32:31.114 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:31.116161 | orchestrator | 08:32:31.114 STDOUT terraform:  + image_name = (known after apply) 2025-02-10 08:32:31.116166 | orchestrator | 08:32:31.114 STDOUT terraform:  + key_pair = "testbed" 2025-02-10 08:32:31.116171 | orchestrator | 08:32:31.115 STDOUT terraform:  + name = "testbed-node-2" 2025-02-10 08:32:31.116176 | orchestrator | 08:32:31.115 STDOUT terraform:  + power_state = "active" 2025-02-10 08:32:31.116181 | orchestrator | 08:32:31.115 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.116186 | orchestrator | 08:32:31.115 STDOUT terraform:  
+ security_groups = (known after apply) 2025-02-10 08:32:31.116191 | orchestrator | 08:32:31.115 STDOUT terraform:  + stop_before_destroy = false 2025-02-10 08:32:31.116196 | orchestrator | 08:32:31.115 STDOUT terraform:  + updated = (known after apply) 2025-02-10 08:32:31.116206 | orchestrator | 08:32:31.115 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-02-10 08:32:31.116253 | orchestrator | 08:32:31.115 STDOUT terraform:  + block_device { 2025-02-10 08:32:31.116263 | orchestrator | 08:32:31.115 STDOUT terraform:  + boot_index = 0 2025-02-10 08:32:31.116268 | orchestrator | 08:32:31.115 STDOUT terraform:  + delete_on_termination = false 2025-02-10 08:32:31.116273 | orchestrator | 08:32:31.115 STDOUT terraform:  + destination_type = "volume" 2025-02-10 08:32:31.116278 | orchestrator | 08:32:31.115 STDOUT terraform:  + multiattach = false 2025-02-10 08:32:31.116283 | orchestrator | 08:32:31.115 STDOUT terraform:  + source_type = "volume" 2025-02-10 08:32:31.116287 | orchestrator | 08:32:31.115 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:31.116292 | orchestrator | 08:32:31.115 STDOUT terraform:  } 2025-02-10 08:32:31.116297 | orchestrator | 08:32:31.115 STDOUT terraform:  + network { 2025-02-10 08:32:31.116302 | orchestrator | 08:32:31.115 STDOUT terraform:  + access_network = false 2025-02-10 08:32:31.116307 | orchestrator | 08:32:31.115 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-10 08:32:31.116312 | orchestrator | 08:32:31.115 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-10 08:32:31.116317 | orchestrator | 08:32:31.115 STDOUT terraform:  + mac = (known after apply) 2025-02-10 08:32:31.116321 | orchestrator | 08:32:31.115 STDOUT terraform:  + name = (known after apply) 2025-02-10 08:32:31.116326 | orchestrator | 08:32:31.115 STDOUT terraform:  + port = (known after apply) 2025-02-10 08:32:31.116331 | orchestrator | 08:32:31.115 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:31.116336 | orchestrator | 08:32:31.115 STDOUT terraform:  } 2025-02-10 08:32:31.116341 | orchestrator | 08:32:31.115 STDOUT terraform:  } 2025-02-10 08:32:31.116346 | orchestrator | 08:32:31.115 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-02-10 08:32:31.116351 | orchestrator | 08:32:31.115 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-02-10 08:32:31.116356 | orchestrator | 08:32:31.115 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-10 08:32:31.116360 | orchestrator | 08:32:31.115 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-10 08:32:31.116365 | orchestrator | 08:32:31.115 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-10 08:32:31.116370 | orchestrator | 08:32:31.115 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.116375 | orchestrator | 08:32:31.115 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.116380 | orchestrator | 08:32:31.115 STDOUT terraform:  + config_drive = true 2025-02-10 08:32:31.116385 | orchestrator | 08:32:31.116 STDOUT terraform:  + created = (known after apply) 2025-02-10 08:32:31.116390 | orchestrator | 08:32:31.116 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-10 08:32:31.116404 | orchestrator | 08:32:31.116 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-02-10 08:32:31.116409 | orchestrator | 08:32:31.116 STDOUT terraform:  + force_delete = false 2025-02-10 08:32:31.116413 | orchestrator | 
08:32:31.116 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.116424 | orchestrator | 08:32:31.116 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:31.116429 | orchestrator | 08:32:31.116 STDOUT terraform:  + image_name = (known after apply) 2025-02-10 08:32:31.116436 | orchestrator | 08:32:31.116 STDOUT terraform:  + key_pair = "testbed" 2025-02-10 08:32:31.116442 | orchestrator | 08:32:31.116 STDOUT terraform:  + name = "testbed-node-3" 2025-02-10 08:32:31.116446 | orchestrator | 08:32:31.116 STDOUT terraform:  + power_state = "active" 2025-02-10 08:32:31.116451 | orchestrator | 08:32:31.116 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.116456 | orchestrator | 08:32:31.116 STDOUT terraform:  + security_groups = (known after apply) 2025-02-10 08:32:31.116461 | orchestrator | 08:32:31.116 STDOUT terraform:  + stop_before_destroy = false 2025-02-10 08:32:31.116466 | orchestrator | 08:32:31.116 STDOUT terraform:  + updated = (known after apply) 2025-02-10 08:32:31.116471 | orchestrator | 08:32:31.116 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-02-10 08:32:31.116478 | orchestrator | 08:32:31.116 STDOUT terraform:  + block_device { 2025-02-10 08:32:31.116483 | orchestrator | 08:32:31.116 STDOUT terraform:  + boot_index = 0 2025-02-10 08:32:31.116490 | orchestrator | 08:32:31.116 STDOUT terraform:  + delete_on_termination = false 2025-02-10 08:32:31.116523 | orchestrator | 08:32:31.116 STDOUT terraform:  + destination_type = "volume" 2025-02-10 08:32:31.116550 | orchestrator | 08:32:31.116 STDOUT terraform:  + multiattach = false 2025-02-10 08:32:31.116578 | orchestrator | 08:32:31.116 STDOUT terraform:  + source_type = "volume" 2025-02-10 08:32:31.116646 | orchestrator | 08:32:31.116 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:31.116665 | orchestrator | 08:32:31.116 STDOUT terraform:  } 2025-02-10 08:32:31.116683 | orchestrator | 08:32:31.116 STDOUT terraform:  + network { 2025-02-10 08:32:31.116720 | orchestrator | 08:32:31.116 STDOUT terraform:  + access_network = false 2025-02-10 08:32:31.116771 | orchestrator | 08:32:31.116 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-10 08:32:31.116829 | orchestrator | 08:32:31.116 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-10 08:32:31.116893 | orchestrator | 08:32:31.116 STDOUT terraform:  + mac = (known after apply) 2025-02-10 08:32:31.116948 | orchestrator | 08:32:31.116 STDOUT terraform:  + name = (known after apply) 2025-02-10 08:32:31.117002 | orchestrator | 08:32:31.116 STDOUT terraform:  + port = (known after apply) 2025-02-10 08:32:31.117052 | orchestrator | 08:32:31.116 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:31.117077 | orchestrator | 08:32:31.117 STDOUT terraform:  } 2025-02-10 08:32:31.117085 | orchestrator | 08:32:31.117 STDOUT terraform:  } 2025-02-10 08:32:31.117136 | orchestrator | 08:32:31.117 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-02-10 08:32:31.117180 | orchestrator | 08:32:31.117 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-02-10 08:32:31.117216 | orchestrator | 08:32:31.117 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-10 08:32:31.117251 | orchestrator | 08:32:31.117 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-10 08:32:31.117285 | orchestrator | 08:32:31.117 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-10 
08:32:31.117321 | orchestrator | 08:32:31.117 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.117346 | orchestrator | 08:32:31.117 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.117364 | orchestrator | 08:32:31.117 STDOUT terraform:  + config_drive = true 2025-02-10 08:32:31.117400 | orchestrator | 08:32:31.117 STDOUT terraform:  + created = (known after apply) 2025-02-10 08:32:31.117434 | orchestrator | 08:32:31.117 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-10 08:32:31.117463 | orchestrator | 08:32:31.117 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-02-10 08:32:31.117487 | orchestrator | 08:32:31.117 STDOUT terraform:  + force_delete = false 2025-02-10 08:32:31.117533 | orchestrator | 08:32:31.117 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.117568 | orchestrator | 08:32:31.117 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:31.126329 | orchestrator | 08:32:31.117 STDOUT terraform:  + image_name = (known after apply) 2025-02-10 08:32:31.126491 | orchestrator | 08:32:31.126 STDOUT terraform:  + key_pair = "testbed" 2025-02-10 08:32:31.126540 | orchestrator | 08:32:31.126 STDOUT terraform:  + name = "testbed-node-4" 2025-02-10 08:32:31.126607 | orchestrator | 08:32:31.126 STDOUT terraform:  + power_state = "active" 2025-02-10 08:32:31.126657 | orchestrator | 08:32:31.126 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.126705 | orchestrator | 08:32:31.126 STDOUT terraform:  + security_groups = (known after apply) 2025-02-10 08:32:31.126740 | orchestrator | 08:32:31.126 STDOUT terraform:  + stop_before_destroy = false 2025-02-10 08:32:31.126784 | orchestrator | 08:32:31.126 STDOUT terraform:  + updated = (known after apply) 2025-02-10 08:32:31.126847 | orchestrator | 08:32:31.126 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-02-10 08:32:31.126874 | orchestrator | 08:32:31.126 STDOUT terraform:  + block_device { 2025-02-10 08:32:31.126907 | orchestrator | 08:32:31.126 STDOUT terraform:  + boot_index = 0 2025-02-10 08:32:31.126943 | orchestrator | 08:32:31.126 STDOUT terraform:  + delete_on_termination = false 2025-02-10 08:32:31.126981 | orchestrator | 08:32:31.126 STDOUT terraform:  + destination_type = "volume" 2025-02-10 08:32:31.127019 | orchestrator | 08:32:31.126 STDOUT terraform:  + multiattach = false 2025-02-10 08:32:31.127057 | orchestrator | 08:32:31.127 STDOUT terraform:  + source_type = "volume" 2025-02-10 08:32:31.127112 | orchestrator | 08:32:31.127 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:31.127136 | orchestrator | 08:32:31.127 STDOUT terraform:  } 2025-02-10 08:32:31.127165 | orchestrator | 08:32:31.127 STDOUT terraform:  + network { 2025-02-10 08:32:31.127205 | orchestrator | 08:32:31.127 STDOUT terraform:  + access_network = false 2025-02-10 08:32:31.127269 | orchestrator | 08:32:31.127 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-10 08:32:31.127310 | orchestrator | 08:32:31.127 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-10 08:32:31.127349 | orchestrator | 08:32:31.127 STDOUT terraform:  + mac = (known after apply) 2025-02-10 08:32:31.127388 | orchestrator | 08:32:31.127 STDOUT terraform:  + name = (known after apply) 2025-02-10 08:32:31.127429 | orchestrator | 08:32:31.127 STDOUT terraform:  + port = (known after apply) 2025-02-10 08:32:31.127470 | orchestrator | 08:32:31.127 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 
08:32:31.127492 | orchestrator | 08:32:31.127 STDOUT terraform:  } 2025-02-10 08:32:31.127515 | orchestrator | 08:32:31.127 STDOUT terraform:  } 2025-02-10 08:32:31.127604 | orchestrator | 08:32:31.127 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-02-10 08:32:31.127672 | orchestrator | 08:32:31.127 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-02-10 08:32:31.127717 | orchestrator | 08:32:31.127 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-10 08:32:31.127761 | orchestrator | 08:32:31.127 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-10 08:32:31.127803 | orchestrator | 08:32:31.127 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-10 08:32:31.127845 | orchestrator | 08:32:31.127 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.127879 | orchestrator | 08:32:31.127 STDOUT terraform:  + availability_zone = "nova" 2025-02-10 08:32:31.127909 | orchestrator | 08:32:31.127 STDOUT terraform:  + config_drive = true 2025-02-10 08:32:31.127954 | orchestrator | 08:32:31.127 STDOUT terraform:  + created = (known after apply) 2025-02-10 08:32:31.127997 | orchestrator | 08:32:31.127 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-10 08:32:31.128094 | orchestrator | 08:32:31.128 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-02-10 08:32:31.128132 | orchestrator | 08:32:31.128 STDOUT terraform:  + force_delete = false 2025-02-10 08:32:31.128178 | orchestrator | 08:32:31.128 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.128221 | orchestrator | 08:32:31.128 STDOUT terraform:  + image_id = (known after apply) 2025-02-10 08:32:31.128266 | orchestrator | 08:32:31.128 STDOUT terraform:  + image_name = (known after apply) 2025-02-10 08:32:31.128300 | orchestrator | 08:32:31.128 STDOUT terraform:  + key_pair = "testbed" 2025-02-10 08:32:31.128339 | orchestrator | 08:32:31.128 STDOUT terraform:  + name = "testbed-node-5" 2025-02-10 08:32:31.128371 | orchestrator | 08:32:31.128 STDOUT terraform:  + power_state = "active" 2025-02-10 08:32:31.128447 | orchestrator | 08:32:31.128 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.128506 | orchestrator | 08:32:31.128 STDOUT terraform:  + security_groups = (known after apply) 2025-02-10 08:32:31.128538 | orchestrator | 08:32:31.128 STDOUT terraform:  + stop_before_destroy = false 2025-02-10 08:32:31.128687 | orchestrator | 08:32:31.128 STDOUT terraform:  + updated = (known after apply) 2025-02-10 08:32:31.128772 | orchestrator | 08:32:31.128 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-02-10 08:32:31.128801 | orchestrator | 08:32:31.128 STDOUT terraform:  + block_device { 2025-02-10 08:32:31.128835 | orchestrator | 08:32:31.128 STDOUT terraform:  + boot_index = 0 2025-02-10 08:32:31.128872 | orchestrator | 08:32:31.128 STDOUT terraform:  + delete_on_termination = false 2025-02-10 08:32:31.128910 | orchestrator | 08:32:31.128 STDOUT terraform:  + destination_type = "volume" 2025-02-10 08:32:31.128948 | orchestrator | 08:32:31.128 STDOUT terraform:  + multiattach = false 2025-02-10 08:32:31.128988 | orchestrator | 08:32:31.128 STDOUT terraform:  + source_type = "volume" 2025-02-10 08:32:31.129034 | orchestrator | 08:32:31.128 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:31.129059 | orchestrator | 08:32:31.129 STDOUT terraform:  } 2025-02-10 08:32:31.129082 | orchestrator | 08:32:31.129 STDOUT terraform:  + 
network { 2025-02-10 08:32:31.129110 | orchestrator | 08:32:31.129 STDOUT terraform:  + access_network = false 2025-02-10 08:32:31.129148 | orchestrator | 08:32:31.129 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-10 08:32:31.129186 | orchestrator | 08:32:31.129 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-10 08:32:31.129225 | orchestrator | 08:32:31.129 STDOUT terraform:  + mac = (known after apply) 2025-02-10 08:32:31.129263 | orchestrator | 08:32:31.129 STDOUT terraform:  + name = (known after apply) 2025-02-10 08:32:31.129302 | orchestrator | 08:32:31.129 STDOUT terraform:  + port = (known after apply) 2025-02-10 08:32:31.129340 | orchestrator | 08:32:31.129 STDOUT terraform:  + uuid = (known after apply) 2025-02-10 08:32:31.129363 | orchestrator | 08:32:31.129 STDOUT terraform:  } 2025-02-10 08:32:31.129386 | orchestrator | 08:32:31.129 STDOUT terraform:  } 2025-02-10 08:32:31.129433 | orchestrator | 08:32:31.129 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-02-10 08:32:31.129475 | orchestrator | 08:32:31.129 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-02-10 08:32:31.129511 | orchestrator | 08:32:31.129 STDOUT terraform:  + fingerprint = (known after apply) 2025-02-10 08:32:31.129547 | orchestrator | 08:32:31.129 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.129575 | orchestrator | 08:32:31.129 STDOUT terraform:  + name = "testbed" 2025-02-10 08:32:31.129644 | orchestrator | 08:32:31.129 STDOUT terraform:  + private_key = (sensitive value) 2025-02-10 08:32:31.129681 | orchestrator | 08:32:31.129 STDOUT terraform:  + public_key = (known after apply) 2025-02-10 08:32:31.129719 | orchestrator | 08:32:31.129 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.129757 | orchestrator | 08:32:31.129 STDOUT terraform:  + user_id = (known after apply) 2025-02-10 08:32:31.129779 | orchestrator | 08:32:31.129 STDOUT terraform:  } 2025-02-10 08:32:31.129837 | orchestrator | 08:32:31.129 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-02-10 08:32:31.129899 | orchestrator | 08:32:31.129 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:31.129937 | orchestrator | 08:32:31.129 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:31.129973 | orchestrator | 08:32:31.129 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.130010 | orchestrator | 08:32:31.129 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:31.130080 | orchestrator | 08:32:31.130 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.130119 | orchestrator | 08:32:31.130 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:31.130141 | orchestrator | 08:32:31.130 STDOUT terraform:  } 2025-02-10 08:32:31.130199 | orchestrator | 08:32:31.130 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-02-10 08:32:31.130256 | orchestrator | 08:32:31.130 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:31.130292 | orchestrator | 08:32:31.130 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:31.130327 | orchestrator | 08:32:31.130 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.130362 | orchestrator | 08:32:31.130 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 
08:32:31.130398 | orchestrator | 08:32:31.130 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.130438 | orchestrator | 08:32:31.130 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:31.130459 | orchestrator | 08:32:31.130 STDOUT terraform:  } 2025-02-10 08:32:31.130516 | orchestrator | 08:32:31.130 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-02-10 08:32:31.130577 | orchestrator | 08:32:31.130 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:31.130632 | orchestrator | 08:32:31.130 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:31.130671 | orchestrator | 08:32:31.130 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.130707 | orchestrator | 08:32:31.130 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:31.130744 | orchestrator | 08:32:31.130 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.130780 | orchestrator | 08:32:31.130 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:31.130801 | orchestrator | 08:32:31.130 STDOUT terraform:  } 2025-02-10 08:32:31.130858 | orchestrator | 08:32:31.130 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-02-10 08:32:31.130915 | orchestrator | 08:32:31.130 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:31.130952 | orchestrator | 08:32:31.130 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:31.130988 | orchestrator | 08:32:31.130 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.131023 | orchestrator | 08:32:31.130 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:31.131058 | orchestrator | 08:32:31.131 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.131101 | orchestrator | 08:32:31.131 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:31.131123 | orchestrator | 08:32:31.131 STDOUT terraform:  } 2025-02-10 08:32:31.131180 | orchestrator | 08:32:31.131 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-02-10 08:32:31.131237 | orchestrator | 08:32:31.131 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:31.131274 | orchestrator | 08:32:31.131 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:31.131311 | orchestrator | 08:32:31.131 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.131346 | orchestrator | 08:32:31.131 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:31.131381 | orchestrator | 08:32:31.131 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.131420 | orchestrator | 08:32:31.131 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:31.131441 | orchestrator | 08:32:31.131 STDOUT terraform:  } 2025-02-10 08:32:31.131500 | orchestrator | 08:32:31.131 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-02-10 08:32:31.131556 | orchestrator | 08:32:31.131 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:31.131611 | orchestrator | 08:32:31.131 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:31.131651 | orchestrator | 08:32:31.131 STDOUT terraform:  + id = 
(known after apply) 2025-02-10 08:32:31.131686 | orchestrator | 08:32:31.131 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:31.131725 | orchestrator | 08:32:31.131 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.131764 | orchestrator | 08:32:31.131 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:31.131786 | orchestrator | 08:32:31.131 STDOUT terraform:  } 2025-02-10 08:32:31.131843 | orchestrator | 08:32:31.131 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-02-10 08:32:31.131903 | orchestrator | 08:32:31.131 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:31.131939 | orchestrator | 08:32:31.131 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:31.131979 | orchestrator | 08:32:31.131 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.132014 | orchestrator | 08:32:31.131 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:31.132050 | orchestrator | 08:32:31.132 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.132087 | orchestrator | 08:32:31.132 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:31.132109 | orchestrator | 08:32:31.132 STDOUT terraform:  } 2025-02-10 08:32:31.132165 | orchestrator | 08:32:31.132 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-02-10 08:32:31.132221 | orchestrator | 08:32:31.132 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:31.132257 | orchestrator | 08:32:31.132 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:31.132299 | orchestrator | 08:32:31.132 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.132334 | orchestrator | 08:32:31.132 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:31.132381 | orchestrator | 08:32:31.132 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.132417 | orchestrator | 08:32:31.132 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:31.132439 | orchestrator | 08:32:31.132 STDOUT terraform:  } 2025-02-10 08:32:31.132496 | orchestrator | 08:32:31.132 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-02-10 08:32:31.132551 | orchestrator | 08:32:31.132 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:31.132605 | orchestrator | 08:32:31.132 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:31.132646 | orchestrator | 08:32:31.132 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.132683 | orchestrator | 08:32:31.132 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:31.132721 | orchestrator | 08:32:31.132 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.132757 | orchestrator | 08:32:31.132 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:31.132780 | orchestrator | 08:32:31.132 STDOUT terraform:  } 2025-02-10 08:32:31.132839 | orchestrator | 08:32:31.132 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created 2025-02-10 08:32:31.132899 | orchestrator | 08:32:31.132 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:31.132937 | orchestrator | 
08:32:31.132 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:31.132974 | orchestrator | 08:32:31.132 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.133011 | orchestrator | 08:32:31.132 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:31.133048 | orchestrator | 08:32:31.133 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.133084 | orchestrator | 08:32:31.133 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:31.133107 | orchestrator | 08:32:31.133 STDOUT terraform:  } 2025-02-10 08:32:31.133167 | orchestrator | 08:32:31.133 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created 2025-02-10 08:32:31.133224 | orchestrator | 08:32:31.133 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:31.133263 | orchestrator | 08:32:31.133 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:31.133300 | orchestrator | 08:32:31.133 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.133336 | orchestrator | 08:32:31.133 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:31.133374 | orchestrator | 08:32:31.133 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.133413 | orchestrator | 08:32:31.133 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:31.133436 | orchestrator | 08:32:31.133 STDOUT terraform:  } 2025-02-10 08:32:31.133494 | orchestrator | 08:32:31.133 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[11] will be created 2025-02-10 08:32:31.133556 | orchestrator | 08:32:31.133 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:31.133614 | orchestrator | 08:32:31.133 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:31.133662 | orchestrator | 08:32:31.133 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.133698 | orchestrator | 08:32:31.133 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:31.133735 | orchestrator | 08:32:31.133 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.133790 | orchestrator | 08:32:31.133 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:31.133813 | orchestrator | 08:32:31.133 STDOUT terraform:  } 2025-02-10 08:32:31.133878 | orchestrator | 08:32:31.133 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[12] will be created 2025-02-10 08:32:31.133936 | orchestrator | 08:32:31.133 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:31.133971 | orchestrator | 08:32:31.133 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:31.134010 | orchestrator | 08:32:31.133 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.134080 | orchestrator | 08:32:31.134 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:31.134118 | orchestrator | 08:32:31.134 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.134155 | orchestrator | 08:32:31.134 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:31.134182 | orchestrator | 08:32:31.134 STDOUT terraform:  } 2025-02-10 08:32:31.134241 | orchestrator | 08:32:31.134 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[13] will be created 2025-02-10 08:32:31.134306 | orchestrator | 
08:32:31.134 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:31.134344 | orchestrator | 08:32:31.134 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:31.134387 | orchestrator | 08:32:31.134 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.134425 | orchestrator | 08:32:31.134 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:31.134462 | orchestrator | 08:32:31.134 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.134497 | orchestrator | 08:32:31.134 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:31.134519 | orchestrator | 08:32:31.134 STDOUT terraform:  } 2025-02-10 08:32:31.134576 | orchestrator | 08:32:31.134 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[14] will be created 2025-02-10 08:32:31.134646 | orchestrator | 08:32:31.134 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:31.134684 | orchestrator | 08:32:31.134 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:31.134721 | orchestrator | 08:32:31.134 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.134756 | orchestrator | 08:32:31.134 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:31.134792 | orchestrator | 08:32:31.134 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.134833 | orchestrator | 08:32:31.134 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:31.134858 | orchestrator | 08:32:31.134 STDOUT terraform:  } 2025-02-10 08:32:31.134915 | orchestrator | 08:32:31.134 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[15] will be created 2025-02-10 08:32:31.134972 | orchestrator | 08:32:31.134 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:31.135009 | orchestrator | 08:32:31.134 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:31.135046 | orchestrator | 08:32:31.135 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.135084 | orchestrator | 08:32:31.135 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:31.135120 | orchestrator | 08:32:31.135 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.135157 | orchestrator | 08:32:31.135 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:31.135178 | orchestrator | 08:32:31.135 STDOUT terraform:  } 2025-02-10 08:32:31.135236 | orchestrator | 08:32:31.135 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created 2025-02-10 08:32:31.135292 | orchestrator | 08:32:31.135 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:31.135327 | orchestrator | 08:32:31.135 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:31.135364 | orchestrator | 08:32:31.135 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.135401 | orchestrator | 08:32:31.135 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:31.135437 | orchestrator | 08:32:31.135 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.135475 | orchestrator | 08:32:31.135 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:31.135497 | orchestrator | 08:32:31.135 STDOUT terraform:  } 2025-02-10 08:32:31.135555 | orchestrator | 08:32:31.135 
STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created 2025-02-10 08:32:31.135653 | orchestrator | 08:32:31.135 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-10 08:32:31.135693 | orchestrator | 08:32:31.135 STDOUT terraform:  + device = (known after apply) 2025-02-10 08:32:31.135733 | orchestrator | 08:32:31.135 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.135769 | orchestrator | 08:32:31.135 STDOUT terraform:  + instance_id = (known after apply) 2025-02-10 08:32:31.135809 | orchestrator | 08:32:31.135 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.135845 | orchestrator | 08:32:31.135 STDOUT terraform:  + volume_id = (known after apply) 2025-02-10 08:32:31.135870 | orchestrator | 08:32:31.135 STDOUT terraform:  } 2025-02-10 08:32:31.135936 | orchestrator | 08:32:31.135 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-02-10 08:32:31.136006 | orchestrator | 08:32:31.135 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-02-10 08:32:31.136047 | orchestrator | 08:32:31.136 STDOUT terraform:  + fixed_ip = (known after apply) 2025-02-10 08:32:31.136092 | orchestrator | 08:32:31.136 STDOUT terraform:  + floating_ip = (known after apply) 2025-02-10 08:32:31.136160 | orchestrator | 08:32:31.136 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.136838 | orchestrator | 08:32:31.136 STDOUT terraform:  + port_id = (known after apply) 2025-02-10 08:32:31.136934 | orchestrator | 08:32:31.136 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.136943 | orchestrator | 08:32:31.136 STDOUT terraform:  } 2025-02-10 08:32:31.136949 | orchestrator | 08:32:31.136 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-02-10 08:32:31.136956 | orchestrator | 08:32:31.136 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-02-10 08:32:31.136962 | orchestrator | 08:32:31.136 STDOUT terraform:  + address = (known after apply) 2025-02-10 08:32:31.136968 | orchestrator | 08:32:31.136 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.136973 | orchestrator | 08:32:31.136 STDOUT terraform:  + dns_domain = (known after apply) 2025-02-10 08:32:31.136978 | orchestrator | 08:32:31.136 STDOUT terraform:  + dns_name = (known after apply) 2025-02-10 08:32:31.136983 | orchestrator | 08:32:31.136 STDOUT terraform:  + fixed_ip = (known after apply) 2025-02-10 08:32:31.136988 | orchestrator | 08:32:31.136 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.136993 | orchestrator | 08:32:31.136 STDOUT terraform:  + pool = "public" 2025-02-10 08:32:31.136999 | orchestrator | 08:32:31.136 STDOUT terraform:  + port_id = (known after apply) 2025-02-10 08:32:31.137004 | orchestrator | 08:32:31.136 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.137010 | orchestrator | 08:32:31.136 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-10 08:32:31.137015 | orchestrator | 08:32:31.136 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.137020 | orchestrator | 08:32:31.136 STDOUT terraform:  } 2025-02-10 08:32:31.137029 | orchestrator | 08:32:31.136 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-02-10 08:32:31.137059 | 
orchestrator | 08:32:31.136 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-02-10 08:32:31.137065 | orchestrator | 08:32:31.136 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-10 08:32:31.137071 | orchestrator | 08:32:31.136 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.137076 | orchestrator | 08:32:31.136 STDOUT terraform:  + availability_zone_hints = [ 2025-02-10 08:32:31.137082 | orchestrator | 08:32:31.136 STDOUT terraform:  + "nova", 2025-02-10 08:32:31.137087 | orchestrator | 08:32:31.136 STDOUT terraform:  ] 2025-02-10 08:32:31.137092 | orchestrator | 08:32:31.136 STDOUT terraform:  + dns_domain = (known after apply) 2025-02-10 08:32:31.137100 | orchestrator | 08:32:31.137 STDOUT terraform:  + external = (known after apply) 2025-02-10 08:32:31.137154 | orchestrator | 08:32:31.137 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.137164 | orchestrator | 08:32:31.137 STDOUT terraform:  + mtu = (known after apply) 2025-02-10 08:32:31.137203 | orchestrator | 08:32:31.137 STDOUT terraform:  + name = "net-testbed-management" 2025-02-10 08:32:31.137223 | orchestrator | 08:32:31.137 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-10 08:32:31.137231 | orchestrator | 08:32:31.137 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-10 08:32:31.137279 | orchestrator | 08:32:31.137 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.137331 | orchestrator | 08:32:31.137 STDOUT terraform:  + shared = (known after apply) 2025-02-10 08:32:31.137339 | orchestrator | 08:32:31.137 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.137390 | orchestrator | 08:32:31.137 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-02-10 08:32:31.137400 | orchestrator | 08:32:31.137 STDOUT terraform:  + segments (known after apply) 2025-02-10 08:32:31.137408 | orchestrator | 08:32:31.137 STDOUT terraform:  } 2025-02-10 08:32:31.137474 | orchestrator | 08:32:31.137 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-02-10 08:32:31.137529 | orchestrator | 08:32:31.137 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-02-10 08:32:31.137659 | orchestrator | 08:32:31.137 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-10 08:32:31.137696 | orchestrator | 08:32:31.137 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-10 08:32:31.137705 | orchestrator | 08:32:31.137 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-10 08:32:31.137713 | orchestrator | 08:32:31.137 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.137725 | orchestrator | 08:32:31.137 STDOUT terraform:  + device_id = (known after apply) 2025-02-10 08:32:31.137776 | orchestrator | 08:32:31.137 STDOUT terraform:  + device_owner = (known after apply) 2025-02-10 08:32:31.137789 | orchestrator | 08:32:31.137 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-10 08:32:31.137797 | orchestrator | 08:32:31.137 STDOUT terraform:  + dns_name = (known after apply) 2025-02-10 08:32:31.137923 | orchestrator | 08:32:31.137 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.137965 | orchestrator | 08:32:31.137 STDOUT terraform:  + mac_address = (known after apply) 2025-02-10 08:32:31.137988 | orchestrator | 08:32:31.137 STDOUT terraform:  + network_id = (known after apply) 2025-02-10 
08:32:31.137995 | orchestrator | 08:32:31.137 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-10 08:32:31.138053 | orchestrator | 08:32:31.137 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-10 08:32:31.138100 | orchestrator | 08:32:31.138 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.138135 | orchestrator | 08:32:31.138 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-10 08:32:31.138185 | orchestrator | 08:32:31.138 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.138230 | orchestrator | 08:32:31.138 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.138265 | orchestrator | 08:32:31.138 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-10 08:32:31.138315 | orchestrator | 08:32:31.138 STDOUT terraform:  } 2025-02-10 08:32:31.138325 | orchestrator | 08:32:31.138 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.138333 | orchestrator | 08:32:31.138 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-10 08:32:31.138374 | orchestrator | 08:32:31.138 STDOUT terraform:  } 2025-02-10 08:32:31.138387 | orchestrator | 08:32:31.138 STDOUT terraform:  + binding (known after apply) 2025-02-10 08:32:31.138429 | orchestrator | 08:32:31.138 STDOUT terraform:  + fixed_ip { 2025-02-10 08:32:31.138447 | orchestrator | 08:32:31.138 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-02-10 08:32:31.138481 | orchestrator | 08:32:31.138 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-10 08:32:31.138489 | orchestrator | 08:32:31.138 STDOUT terraform:  } 2025-02-10 08:32:31.138497 | orchestrator | 08:32:31.138 STDOUT terraform:  } 2025-02-10 08:32:31.138557 | orchestrator | 08:32:31.138 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-02-10 08:32:31.138619 | orchestrator | 08:32:31.138 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-10 08:32:31.138677 | orchestrator | 08:32:31.138 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-10 08:32:31.138707 | orchestrator | 08:32:31.138 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-10 08:32:31.138753 | orchestrator | 08:32:31.138 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-10 08:32:31.138779 | orchestrator | 08:32:31.138 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.138822 | orchestrator | 08:32:31.138 STDOUT terraform:  + device_id = (known after apply) 2025-02-10 08:32:31.138860 | orchestrator | 08:32:31.138 STDOUT terraform:  + device_owner = (known after apply) 2025-02-10 08:32:31.138894 | orchestrator | 08:32:31.138 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-10 08:32:31.138938 | orchestrator | 08:32:31.138 STDOUT terraform:  + dns_name = (known after apply) 2025-02-10 08:32:31.138972 | orchestrator | 08:32:31.138 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.139027 | orchestrator | 08:32:31.138 STDOUT terraform:  + mac_address = (known after apply) 2025-02-10 08:32:31.139039 | orchestrator | 08:32:31.139 STDOUT terraform:  + network_id = (known after apply) 2025-02-10 08:32:31.139093 | orchestrator | 08:32:31.139 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-10 08:32:31.139119 | orchestrator | 08:32:31.139 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-10 08:32:31.139160 | orchestrator | 08:32:31.139 STDOUT terraform:  + region = 
(known after apply) 2025-02-10 08:32:31.139201 | orchestrator | 08:32:31.139 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-10 08:32:31.139236 | orchestrator | 08:32:31.139 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.139244 | orchestrator | 08:32:31.139 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.139286 | orchestrator | 08:32:31.139 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-10 08:32:31.139315 | orchestrator | 08:32:31.139 STDOUT terraform:  } 2025-02-10 08:32:31.139323 | orchestrator | 08:32:31.139 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.139363 | orchestrator | 08:32:31.139 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-10 08:32:31.139401 | orchestrator | 08:32:31.139 STDOUT terraform:  } 2025-02-10 08:32:31.139408 | orchestrator | 08:32:31.139 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.139415 | orchestrator | 08:32:31.139 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-10 08:32:31.139431 | orchestrator | 08:32:31.139 STDOUT terraform:  } 2025-02-10 08:32:31.139438 | orchestrator | 08:32:31.139 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.139445 | orchestrator | 08:32:31.139 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-10 08:32:31.139472 | orchestrator | 08:32:31.139 STDOUT terraform:  } 2025-02-10 08:32:31.139486 | orchestrator | 08:32:31.139 STDOUT terraform:  + binding (known after apply) 2025-02-10 08:32:31.139493 | orchestrator | 08:32:31.139 STDOUT terraform:  + fixed_ip { 2025-02-10 08:32:31.139536 | orchestrator | 08:32:31.139 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-02-10 08:32:31.139545 | orchestrator | 08:32:31.139 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-10 08:32:31.139570 | orchestrator | 08:32:31.139 STDOUT terraform:  } 2025-02-10 08:32:31.139662 | orchestrator | 08:32:31.139 STDOUT terraform:  } 2025-02-10 08:32:31.139671 | orchestrator | 08:32:31.139 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-02-10 08:32:31.139724 | orchestrator | 08:32:31.139 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-10 08:32:31.139751 | orchestrator | 08:32:31.139 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-10 08:32:31.139805 | orchestrator | 08:32:31.139 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-10 08:32:31.139813 | orchestrator | 08:32:31.139 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-10 08:32:31.139875 | orchestrator | 08:32:31.139 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.139883 | orchestrator | 08:32:31.139 STDOUT terraform:  + device_id = (known after apply) 2025-02-10 08:32:31.139930 | orchestrator | 08:32:31.139 STDOUT terraform:  + device_owner = (known after apply) 2025-02-10 08:32:31.139972 | orchestrator | 08:32:31.139 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-10 08:32:31.140003 | orchestrator | 08:32:31.139 STDOUT terraform:  + dns_name = (known after apply) 2025-02-10 08:32:31.140044 | orchestrator | 08:32:31.139 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.140085 | orchestrator | 08:32:31.140 STDOUT terraform:  + mac_address = (known after apply) 2025-02-10 08:32:31.140121 | orchestrator | 08:32:31.140 STDOUT terraform:  + network_id = (known after apply) 2025-02-10 08:32:31.140160 | orchestrator | 
08:32:31.140 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-10 08:32:31.140198 | orchestrator | 08:32:31.140 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-10 08:32:31.140239 | orchestrator | 08:32:31.140 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.140279 | orchestrator | 08:32:31.140 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-10 08:32:31.140318 | orchestrator | 08:32:31.140 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.140360 | orchestrator | 08:32:31.140 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.140373 | orchestrator | 08:32:31.140 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-10 08:32:31.140379 | orchestrator | 08:32:31.140 STDOUT terraform:  } 2025-02-10 08:32:31.140386 | orchestrator | 08:32:31.140 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.140427 | orchestrator | 08:32:31.140 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-10 08:32:31.140438 | orchestrator | 08:32:31.140 STDOUT terraform:  } 2025-02-10 08:32:31.140448 | orchestrator | 08:32:31.140 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.140477 | orchestrator | 08:32:31.140 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-10 08:32:31.140525 | orchestrator | 08:32:31.140 STDOUT terraform:  } 2025-02-10 08:32:31.140536 | orchestrator | 08:32:31.140 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.140565 | orchestrator | 08:32:31.140 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-10 08:32:31.140575 | orchestrator | 08:32:31.140 STDOUT terraform:  } 2025-02-10 08:32:31.140630 | orchestrator | 08:32:31.140 STDOUT terraform:  + binding (known after apply) 2025-02-10 08:32:31.140638 | orchestrator | 08:32:31.140 STDOUT terraform:  + fixed_ip { 2025-02-10 08:32:31.140645 | orchestrator | 08:32:31.140 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-02-10 08:32:31.140653 | orchestrator | 08:32:31.140 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-10 08:32:31.140662 | orchestrator | 08:32:31.140 STDOUT terraform:  } 2025-02-10 08:32:31.140718 | orchestrator | 08:32:31.140 STDOUT terraform:  } 2025-02-10 08:32:31.140729 | orchestrator | 08:32:31.140 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-02-10 08:32:31.140740 | orchestrator | 08:32:31.140 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-10 08:32:31.140803 | orchestrator | 08:32:31.140 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-10 08:32:31.140815 | orchestrator | 08:32:31.140 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-10 08:32:31.140855 | orchestrator | 08:32:31.140 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-10 08:32:31.140878 | orchestrator | 08:32:31.140 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.140925 | orchestrator | 08:32:31.140 STDOUT terraform:  + device_id = (known after apply) 2025-02-10 08:32:31.140980 | orchestrator | 08:32:31.140 STDOUT terraform:  + device_owner = (known after apply) 2025-02-10 08:32:31.140992 | orchestrator | 08:32:31.140 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-10 08:32:31.141033 | orchestrator | 08:32:31.140 STDOUT terraform:  + dns_name = (known after apply) 2025-02-10 08:32:31.141067 | orchestrator | 08:32:31.141 STDOUT terraform:  + id = (known after 
apply) 2025-02-10 08:32:31.141106 | orchestrator | 08:32:31.141 STDOUT terraform:  + mac_address = (known after apply) 2025-02-10 08:32:31.141146 | orchestrator | 08:32:31.141 STDOUT terraform:  + network_id = (known after apply) 2025-02-10 08:32:31.141178 | orchestrator | 08:32:31.141 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-10 08:32:31.141232 | orchestrator | 08:32:31.141 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-10 08:32:31.141261 | orchestrator | 08:32:31.141 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.141302 | orchestrator | 08:32:31.141 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-10 08:32:31.141331 | orchestrator | 08:32:31.141 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.141340 | orchestrator | 08:32:31.141 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.141378 | orchestrator | 08:32:31.141 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-10 08:32:31.141418 | orchestrator | 08:32:31.141 STDOUT terraform:  } 2025-02-10 08:32:31.141431 | orchestrator | 08:32:31.141 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.141437 | orchestrator | 08:32:31.141 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-10 08:32:31.141444 | orchestrator | 08:32:31.141 STDOUT terraform:  } 2025-02-10 08:32:31.141452 | orchestrator | 08:32:31.141 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.141492 | orchestrator | 08:32:31.141 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-10 08:32:31.141500 | orchestrator | 08:32:31.141 STDOUT terraform:  } 2025-02-10 08:32:31.141529 | orchestrator | 08:32:31.141 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.141561 | orchestrator | 08:32:31.141 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-10 08:32:31.141634 | orchestrator | 08:32:31.141 STDOUT terraform:  } 2025-02-10 08:32:31.141642 | orchestrator | 08:32:31.141 STDOUT terraform:  + binding (known after apply) 2025-02-10 08:32:31.141650 | orchestrator | 08:32:31.141 STDOUT terraform:  + fixed_ip { 2025-02-10 08:32:31.141687 | orchestrator | 08:32:31.141 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-02-10 08:32:31.141715 | orchestrator | 08:32:31.141 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-10 08:32:31.141723 | orchestrator | 08:32:31.141 STDOUT terraform:  } 2025-02-10 08:32:31.141731 | orchestrator | 08:32:31.141 STDOUT terraform:  } 2025-02-10 08:32:31.141790 | orchestrator | 08:32:31.141 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-02-10 08:32:31.141833 | orchestrator | 08:32:31.141 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-10 08:32:31.145388 | orchestrator | 08:32:31.141 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-10 08:32:31.145421 | orchestrator | 08:32:31.141 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-10 08:32:31.145437 | orchestrator | 08:32:31.141 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-10 08:32:31.145444 | orchestrator | 08:32:31.141 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.145450 | orchestrator | 08:32:31.141 STDOUT terraform:  + device_id = (known after apply) 2025-02-10 08:32:31.145456 | orchestrator | 08:32:31.141 STDOUT terraform:  + device_owner = (known after apply) 2025-02-10 08:32:31.145462 | orchestrator | 
08:32:31.142 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-10 08:32:31.145467 | orchestrator | 08:32:31.142 STDOUT terraform:  + dns_name = (known after apply) 2025-02-10 08:32:31.145473 | orchestrator | 08:32:31.142 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.145478 | orchestrator | 08:32:31.142 STDOUT terraform:  + mac_address = (known after apply) 2025-02-10 08:32:31.145482 | orchestrator | 08:32:31.142 STDOUT terraform:  + network_id = (known after apply) 2025-02-10 08:32:31.145488 | orchestrator | 08:32:31.142 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-10 08:32:31.145492 | orchestrator | 08:32:31.142 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-10 08:32:31.145498 | orchestrator | 08:32:31.142 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.145503 | orchestrator | 08:32:31.142 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-10 08:32:31.145507 | orchestrator | 08:32:31.142 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.145512 | orchestrator | 08:32:31.142 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.145518 | orchestrator | 08:32:31.142 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-10 08:32:31.145523 | orchestrator | 08:32:31.142 STDOUT terraform:  } 2025-02-10 08:32:31.145529 | orchestrator | 08:32:31.142 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.145534 | orchestrator | 08:32:31.142 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-10 08:32:31.145538 | orchestrator | 08:32:31.142 STDOUT terraform:  } 2025-02-10 08:32:31.145567 | orchestrator | 08:32:31.142 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.145572 | orchestrator | 08:32:31.142 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-10 08:32:31.145577 | orchestrator | 08:32:31.142 STDOUT terraform:  } 2025-02-10 08:32:31.145620 | orchestrator | 08:32:31.142 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.145625 | orchestrator | 08:32:31.142 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-10 08:32:31.145631 | orchestrator | 08:32:31.142 STDOUT terraform:  } 2025-02-10 08:32:31.145636 | orchestrator | 08:32:31.142 STDOUT terraform:  + binding (known after apply) 2025-02-10 08:32:31.145642 | orchestrator | 08:32:31.142 STDOUT terraform:  + fixed_ip { 2025-02-10 08:32:31.145647 | orchestrator | 08:32:31.142 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-02-10 08:32:31.145651 | orchestrator | 08:32:31.142 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-10 08:32:31.145657 | orchestrator | 08:32:31.142 STDOUT terraform:  } 2025-02-10 08:32:31.145667 | orchestrator | 08:32:31.142 STDOUT terraform:  } 2025-02-10 08:32:31.145672 | orchestrator | 08:32:31.142 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-02-10 08:32:31.145678 | orchestrator | 08:32:31.142 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-10 08:32:31.145683 | orchestrator | 08:32:31.142 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-10 08:32:31.145688 | orchestrator | 08:32:31.142 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-10 08:32:31.145703 | orchestrator | 08:32:31.142 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-10 08:32:31.145708 | orchestrator | 08:32:31.142 STDOUT terraform:  + all_tags = (known after 
apply) 2025-02-10 08:32:31.145713 | orchestrator | 08:32:31.142 STDOUT terraform:  + device_id = (known after apply) 2025-02-10 08:32:31.145719 | orchestrator | 08:32:31.142 STDOUT terraform:  + device_owner = (known after apply) 2025-02-10 08:32:31.145723 | orchestrator | 08:32:31.142 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-10 08:32:31.145731 | orchestrator | 08:32:31.142 STDOUT terraform:  + dns_name = (known after apply) 2025-02-10 08:32:31.145737 | orchestrator | 08:32:31.142 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.145742 | orchestrator | 08:32:31.143 STDOUT terraform:  + mac_address = (known after apply) 2025-02-10 08:32:31.145746 | orchestrator | 08:32:31.143 STDOUT terraform:  + network_id = (known after apply) 2025-02-10 08:32:31.145751 | orchestrator | 08:32:31.143 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-10 08:32:31.145756 | orchestrator | 08:32:31.143 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-10 08:32:31.145761 | orchestrator | 08:32:31.143 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.145766 | orchestrator | 08:32:31.143 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-10 08:32:31.145771 | orchestrator | 08:32:31.143 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.145776 | orchestrator | 08:32:31.143 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.145781 | orchestrator | 08:32:31.143 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-10 08:32:31.145787 | orchestrator | 08:32:31.143 STDOUT terraform:  } 2025-02-10 08:32:31.145792 | orchestrator | 08:32:31.143 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.145796 | orchestrator | 08:32:31.143 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-10 08:32:31.145801 | orchestrator | 08:32:31.143 STDOUT terraform:  } 2025-02-10 08:32:31.145807 | orchestrator | 08:32:31.143 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.145817 | orchestrator | 08:32:31.143 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-10 08:32:31.145822 | orchestrator | 08:32:31.143 STDOUT terraform:  } 2025-02-10 08:32:31.145827 | orchestrator | 08:32:31.143 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.145833 | orchestrator | 08:32:31.143 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-10 08:32:31.145841 | orchestrator | 08:32:31.143 STDOUT terraform:  } 2025-02-10 08:32:31.145846 | orchestrator | 08:32:31.143 STDOUT terraform:  + binding (known after apply) 2025-02-10 08:32:31.145851 | orchestrator | 08:32:31.143 STDOUT terraform:  + fixed_ip { 2025-02-10 08:32:31.145856 | orchestrator | 08:32:31.143 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-02-10 08:32:31.145862 | orchestrator | 08:32:31.143 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-10 08:32:31.145867 | orchestrator | 08:32:31.143 STDOUT terraform:  } 2025-02-10 08:32:31.145872 | orchestrator | 08:32:31.143 STDOUT terraform:  } 2025-02-10 08:32:31.145877 | orchestrator | 08:32:31.143 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-02-10 08:32:31.145882 | orchestrator | 08:32:31.143 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-10 08:32:31.145887 | orchestrator | 08:32:31.143 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-10 08:32:31.145892 | orchestrator | 08:32:31.143 STDOUT 
terraform:  + all_fixed_ips = (known after apply) 2025-02-10 08:32:31.145897 | orchestrator | 08:32:31.143 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-10 08:32:31.145902 | orchestrator | 08:32:31.143 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.145907 | orchestrator | 08:32:31.143 STDOUT terraform:  + device_id = (known after apply) 2025-02-10 08:32:31.145919 | orchestrator | 08:32:31.143 STDOUT terraform:  + device_owner = (known after apply) 2025-02-10 08:32:31.145924 | orchestrator | 08:32:31.143 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-10 08:32:31.145929 | orchestrator | 08:32:31.143 STDOUT terraform:  + dns_name = (known after apply) 2025-02-10 08:32:31.145934 | orchestrator | 08:32:31.143 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.145938 | orchestrator | 08:32:31.143 STDOUT terraform:  + mac_address = (known after apply) 2025-02-10 08:32:31.145943 | orchestrator | 08:32:31.143 STDOUT terraform:  + network_id = (known after apply) 2025-02-10 08:32:31.145948 | orchestrator | 08:32:31.143 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-10 08:32:31.145953 | orchestrator | 08:32:31.143 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-10 08:32:31.145957 | orchestrator | 08:32:31.144 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.145962 | orchestrator | 08:32:31.144 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-10 08:32:31.145968 | orchestrator | 08:32:31.144 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.145972 | orchestrator | 08:32:31.144 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.145981 | orchestrator | 08:32:31.144 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-10 08:32:31.145986 | orchestrator | 08:32:31.144 STDOUT terraform:  } 2025-02-10 08:32:31.145991 | orchestrator | 08:32:31.144 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.145996 | orchestrator | 08:32:31.144 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-10 08:32:31.146004 | orchestrator | 08:32:31.144 STDOUT terraform:  } 2025-02-10 08:32:31.146009 | orchestrator | 08:32:31.144 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.146048 | orchestrator | 08:32:31.144 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-10 08:32:31.146054 | orchestrator | 08:32:31.144 STDOUT terraform:  } 2025-02-10 08:32:31.146059 | orchestrator | 08:32:31.144 STDOUT terraform:  + allowed_address_pairs { 2025-02-10 08:32:31.146065 | orchestrator | 08:32:31.144 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-10 08:32:31.146070 | orchestrator | 08:32:31.144 STDOUT terraform:  } 2025-02-10 08:32:31.146075 | orchestrator | 08:32:31.144 STDOUT terraform:  + binding (known after apply) 2025-02-10 08:32:31.146081 | orchestrator | 08:32:31.144 STDOUT terraform:  + fixed_ip { 2025-02-10 08:32:31.146086 | orchestrator | 08:32:31.144 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-02-10 08:32:31.146093 | orchestrator | 08:32:31.144 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-10 08:32:31.146098 | orchestrator | 08:32:31.144 STDOUT terraform:  } 2025-02-10 08:32:31.146103 | orchestrator | 08:32:31.144 STDOUT terraform:  } 2025-02-10 08:32:31.146109 | orchestrator | 08:32:31.144 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-02-10 08:32:31.146116 | orchestrator | 
08:32:31.144 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-02-10 08:32:31.146121 | orchestrator | 08:32:31.144 STDOUT terraform:  + force_destroy = false 2025-02-10 08:32:31.146127 | orchestrator | 08:32:31.144 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.146132 | orchestrator | 08:32:31.144 STDOUT terraform:  + port_id = (known after apply) 2025-02-10 08:32:31.146137 | orchestrator | 08:32:31.144 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.146142 | orchestrator | 08:32:31.144 STDOUT terraform:  + router_id = (known after apply) 2025-02-10 08:32:31.146146 | orchestrator | 08:32:31.144 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-10 08:32:31.146152 | orchestrator | 08:32:31.144 STDOUT terraform:  } 2025-02-10 08:32:31.146157 | orchestrator | 08:32:31.144 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-02-10 08:32:31.146167 | orchestrator | 08:32:31.144 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-02-10 08:32:31.146172 | orchestrator | 08:32:31.144 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-10 08:32:31.146178 | orchestrator | 08:32:31.144 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.146182 | orchestrator | 08:32:31.144 STDOUT terraform:  + availability_zone_hints = [ 2025-02-10 08:32:31.146187 | orchestrator | 08:32:31.144 STDOUT terraform:  + "nova", 2025-02-10 08:32:31.146192 | orchestrator | 08:32:31.144 STDOUT terraform:  ] 2025-02-10 08:32:31.146197 | orchestrator | 08:32:31.144 STDOUT terraform:  + distributed = (known after apply) 2025-02-10 08:32:31.146202 | orchestrator | 08:32:31.144 STDOUT terraform:  + enable_snat = (known after apply) 2025-02-10 08:32:31.146212 | orchestrator | 08:32:31.144 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-02-10 08:32:31.146217 | orchestrator | 08:32:31.144 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.146222 | orchestrator | 08:32:31.144 STDOUT terraform:  + name = "testbed" 2025-02-10 08:32:31.146229 | orchestrator | 08:32:31.144 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.146234 | orchestrator | 08:32:31.145 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.146240 | orchestrator | 08:32:31.145 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-02-10 08:32:31.146245 | orchestrator | 08:32:31.145 STDOUT terraform:  } 2025-02-10 08:32:31.146250 | orchestrator | 08:32:31.145 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-02-10 08:32:31.146257 | orchestrator | 08:32:31.145 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-02-10 08:32:31.146262 | orchestrator | 08:32:31.145 STDOUT terraform:  + description = "ssh" 2025-02-10 08:32:31.146267 | orchestrator | 08:32:31.145 STDOUT terraform:  + direction = "ingress" 2025-02-10 08:32:31.146272 | orchestrator | 08:32:31.145 STDOUT terraform:  + ethertype = "IPv4" 2025-02-10 08:32:31.146277 | orchestrator | 08:32:31.145 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.146282 | orchestrator | 08:32:31.145 STDOUT terraform:  + port_range_max = 22 2025-02-10 08:32:31.146286 | orchestrator | 08:32:31.145 STDOUT terraform:  + port_range_min = 22 2025-02-10 08:32:31.146291 | orchestrator | 08:32:31.145 STDOUT terraform:  + 
protocol = "tcp" 2025-02-10 08:32:31.146296 | orchestrator | 08:32:31.145 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.146301 | orchestrator | 08:32:31.145 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-10 08:32:31.146306 | orchestrator | 08:32:31.145 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-10 08:32:31.146311 | orchestrator | 08:32:31.145 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-10 08:32:31.146316 | orchestrator | 08:32:31.145 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.146321 | orchestrator | 08:32:31.145 STDOUT terraform:  } 2025-02-10 08:32:31.146326 | orchestrator | 08:32:31.145 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-02-10 08:32:31.146331 | orchestrator | 08:32:31.145 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-02-10 08:32:31.146336 | orchestrator | 08:32:31.145 STDOUT terraform:  + description = "wireguard" 2025-02-10 08:32:31.146341 | orchestrator | 08:32:31.145 STDOUT terraform:  + direction = "ingress" 2025-02-10 08:32:31.146345 | orchestrator | 08:32:31.145 STDOUT terraform:  + ethertype = "IPv4" 2025-02-10 08:32:31.146350 | orchestrator | 08:32:31.145 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.146355 | orchestrator | 08:32:31.145 STDOUT terraform:  + port_range_max = 51820 2025-02-10 08:32:31.146367 | orchestrator | 08:32:31.145 STDOUT terraform:  + port_range_min = 51820 2025-02-10 08:32:31.146413 | orchestrator | 08:32:31.145 STDOUT terraform:  + protocol = "udp" 2025-02-10 08:32:31.146420 | orchestrator | 08:32:31.145 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.146425 | orchestrator | 08:32:31.145 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-10 08:32:31.146434 | orchestrator | 08:32:31.145 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-10 08:32:31.146440 | orchestrator | 08:32:31.145 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-10 08:32:31.146445 | orchestrator | 08:32:31.145 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.146451 | orchestrator | 08:32:31.145 STDOUT terraform:  } 2025-02-10 08:32:31.146456 | orchestrator | 08:32:31.145 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-02-10 08:32:31.146461 | orchestrator | 08:32:31.145 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-02-10 08:32:31.146466 | orchestrator | 08:32:31.145 STDOUT terraform:  + direction 2025-02-10 08:32:31.146471 | orchestrator | 08:32:31.146 STDOUT terraform:  = "ingress" 2025-02-10 08:32:31.146476 | orchestrator | 08:32:31.146 STDOUT terraform:  + ethertype = "IPv4" 2025-02-10 08:32:31.146481 | orchestrator | 08:32:31.146 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.146487 | orchestrator | 08:32:31.146 STDOUT terraform:  + protocol = "tcp" 2025-02-10 08:32:31.146492 | orchestrator | 08:32:31.146 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.146497 | orchestrator | 08:32:31.146 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-10 08:32:31.146502 | orchestrator | 08:32:31.146 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-02-10 08:32:31.146507 | orchestrator | 08:32:31.146 STDOUT terraform:  + security_group_id = 
(known after apply) 2025-02-10 08:32:31.146512 | orchestrator | 08:32:31.146 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.146517 | orchestrator | 08:32:31.146 STDOUT terraform:  } 2025-02-10 08:32:31.146522 | orchestrator | 08:32:31.146 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-02-10 08:32:31.146527 | orchestrator | 08:32:31.146 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-02-10 08:32:31.146535 | orchestrator | 08:32:31.146 STDOUT terraform:  + direction = "ingress" 2025-02-10 08:32:31.146713 | orchestrator | 08:32:31.146 STDOUT terraform:  + ethertype = "IPv4" 2025-02-10 08:32:31.146723 | orchestrator | 08:32:31.146 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.146728 | orchestrator | 08:32:31.146 STDOUT terraform:  + protocol = "udp" 2025-02-10 08:32:31.146738 | orchestrator | 08:32:31.146 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.146743 | orchestrator | 08:32:31.146 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-10 08:32:31.146747 | orchestrator | 08:32:31.146 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-02-10 08:32:31.146763 | orchestrator | 08:32:31.146 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-10 08:32:31.146768 | orchestrator | 08:32:31.146 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.146773 | orchestrator | 08:32:31.146 STDOUT terraform:  } 2025-02-10 08:32:31.146782 | orchestrator | 08:32:31.146 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-02-10 08:32:31.146787 | orchestrator | 08:32:31.146 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-02-10 08:32:31.146795 | orchestrator | 08:32:31.146 STDOUT terraform:  + direction = "ingress" 2025-02-10 08:32:31.146800 | orchestrator | 08:32:31.146 STDOUT terraform:  + ethertype = "IPv4" 2025-02-10 08:32:31.146807 | orchestrator | 08:32:31.146 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.146846 | orchestrator | 08:32:31.146 STDOUT terraform:  + protocol = "icmp" 2025-02-10 08:32:31.146862 | orchestrator | 08:32:31.146 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.146869 | orchestrator | 08:32:31.146 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-10 08:32:31.146901 | orchestrator | 08:32:31.146 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-10 08:32:31.146908 | orchestrator | 08:32:31.146 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-10 08:32:31.146934 | orchestrator | 08:32:31.146 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.147415 | orchestrator | 08:32:31.146 STDOUT terraform:  } 2025-02-10 08:32:31.147427 | orchestrator | 08:32:31.146 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-02-10 08:32:31.148448 | orchestrator | 08:32:31.146 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-02-10 08:32:31.148485 | orchestrator | 08:32:31.147 STDOUT terraform:  + direction = "ingress" 2025-02-10 08:32:31.148496 | orchestrator | 08:32:31.147 STDOUT terraform:  + ethertype = "IPv4" 2025-02-10 08:32:31.148505 | orchestrator | 08:32:31.147 STDOUT terraform:  + id = (known after apply) 
2025-02-10 08:32:31.148515 | orchestrator | 08:32:31.147 STDOUT terraform:  + protocol = "tcp" 2025-02-10 08:32:31.148524 | orchestrator | 08:32:31.147 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.148533 | orchestrator | 08:32:31.147 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-10 08:32:31.148541 | orchestrator | 08:32:31.147 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-10 08:32:31.148549 | orchestrator | 08:32:31.147 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-10 08:32:31.148558 | orchestrator | 08:32:31.147 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.148566 | orchestrator | 08:32:31.147 STDOUT terraform:  } 2025-02-10 08:32:31.148575 | orchestrator | 08:32:31.147 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-02-10 08:32:31.148628 | orchestrator | 08:32:31.147 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-02-10 08:32:31.148648 | orchestrator | 08:32:31.147 STDOUT terraform:  + direction = "ingress" 2025-02-10 08:32:31.148657 | orchestrator | 08:32:31.147 STDOUT terraform:  + ethertype = "IPv4" 2025-02-10 08:32:31.148666 | orchestrator | 08:32:31.147 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.148682 | orchestrator | 08:32:31.147 STDOUT terraform:  + protocol = "udp" 2025-02-10 08:32:31.148691 | orchestrator | 08:32:31.147 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.148699 | orchestrator | 08:32:31.147 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-10 08:32:31.148708 | orchestrator | 08:32:31.147 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-10 08:32:31.148716 | orchestrator | 08:32:31.147 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-10 08:32:31.148724 | orchestrator | 08:32:31.147 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.148733 | orchestrator | 08:32:31.147 STDOUT terraform:  } 2025-02-10 08:32:31.148742 | orchestrator | 08:32:31.147 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-02-10 08:32:31.148750 | orchestrator | 08:32:31.147 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-02-10 08:32:31.148759 | orchestrator | 08:32:31.147 STDOUT terraform:  + direction = "ingress" 2025-02-10 08:32:31.148768 | orchestrator | 08:32:31.147 STDOUT terraform:  + ethertype = "IPv4" 2025-02-10 08:32:31.148776 | orchestrator | 08:32:31.147 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.148785 | orchestrator | 08:32:31.147 STDOUT terraform:  + protocol = "icmp" 2025-02-10 08:32:31.148793 | orchestrator | 08:32:31.147 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.148802 | orchestrator | 08:32:31.147 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-10 08:32:31.148810 | orchestrator | 08:32:31.147 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-10 08:32:31.148819 | orchestrator | 08:32:31.147 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-10 08:32:31.148828 | orchestrator | 08:32:31.147 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.148836 | orchestrator | 08:32:31.147 STDOUT terraform:  } 2025-02-10 08:32:31.148845 | orchestrator | 08:32:31.147 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-02-10 08:32:31.148854 | orchestrator | 08:32:31.147 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-02-10 08:32:31.148863 | orchestrator | 08:32:31.148 STDOUT terraform:  + description = "vrrp" 2025-02-10 08:32:31.148871 | orchestrator | 08:32:31.148 STDOUT terraform:  + direction = "ingress" 2025-02-10 08:32:31.148879 | orchestrator | 08:32:31.148 STDOUT terraform:  + ethertype = "IPv4" 2025-02-10 08:32:31.148888 | orchestrator | 08:32:31.148 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.148897 | orchestrator | 08:32:31.148 STDOUT terraform:  + protocol = "112" 2025-02-10 08:32:31.148910 | orchestrator | 08:32:31.148 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.148925 | orchestrator | 08:32:31.148 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-10 08:32:31.148935 | orchestrator | 08:32:31.148 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-10 08:32:31.148943 | orchestrator | 08:32:31.148 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-10 08:32:31.148951 | orchestrator | 08:32:31.148 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.148959 | orchestrator | 08:32:31.148 STDOUT terraform:  } 2025-02-10 08:32:31.148968 | orchestrator | 08:32:31.148 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-02-10 08:32:31.148977 | orchestrator | 08:32:31.148 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-02-10 08:32:31.148988 | orchestrator | 08:32:31.148 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.149057 | orchestrator | 08:32:31.148 STDOUT terraform:  + description = "management security group" 2025-02-10 08:32:31.149066 | orchestrator | 08:32:31.148 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.149075 | orchestrator | 08:32:31.148 STDOUT terraform:  + name = "testbed-management" 2025-02-10 08:32:31.149084 | orchestrator | 08:32:31.148 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.149092 | orchestrator | 08:32:31.148 STDOUT terraform:  + stateful = (known after apply) 2025-02-10 08:32:31.149100 | orchestrator | 08:32:31.148 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.149108 | orchestrator | 08:32:31.148 STDOUT terraform:  } 2025-02-10 08:32:31.149116 | orchestrator | 08:32:31.148 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-02-10 08:32:31.149127 | orchestrator | 08:32:31.148 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-02-10 08:32:31.149368 | orchestrator | 08:32:31.149 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.149389 | orchestrator | 08:32:31.149 STDOUT terraform:  + description = "node security group" 2025-02-10 08:32:31.149403 | orchestrator | 08:32:31.149 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.149411 | orchestrator | 08:32:31.149 STDOUT terraform:  + name = "testbed-node" 2025-02-10 08:32:31.149418 | orchestrator | 08:32:31.149 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.149427 | orchestrator | 08:32:31.149 STDOUT terraform:  + stateful = (known after apply) 2025-02-10 08:32:31.149434 | orchestrator | 08:32:31.149 STDOUT terraform:  + tenant_id = (known after 
apply) 2025-02-10 08:32:31.149443 | orchestrator | 08:32:31.149 STDOUT terraform:  } 2025-02-10 08:32:31.149453 | orchestrator | 08:32:31.149 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-02-10 08:32:31.149514 | orchestrator | 08:32:31.149 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-02-10 08:32:31.149528 | orchestrator | 08:32:31.149 STDOUT terraform:  + all_tags = (known after apply) 2025-02-10 08:32:31.149572 | orchestrator | 08:32:31.149 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-02-10 08:32:31.149604 | orchestrator | 08:32:31.149 STDOUT terraform:  + dns_nameservers = [ 2025-02-10 08:32:31.149611 | orchestrator | 08:32:31.149 STDOUT terraform:  + "8.8.8.8", 2025-02-10 08:32:31.149619 | orchestrator | 08:32:31.149 STDOUT terraform:  + "9.9.9.9", 2025-02-10 08:32:31.149664 | orchestrator | 08:32:31.149 STDOUT terraform:  ] 2025-02-10 08:32:31.149672 | orchestrator | 08:32:31.149 STDOUT terraform:  + enable_dhcp = true 2025-02-10 08:32:31.149698 | orchestrator | 08:32:31.149 STDOUT terraform:  + gateway_ip = (known after apply) 2025-02-10 08:32:31.149745 | orchestrator | 08:32:31.149 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.149755 | orchestrator | 08:32:31.149 STDOUT terraform:  + ip_version = 4 2025-02-10 08:32:31.149814 | orchestrator | 08:32:31.149 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-02-10 08:32:31.149844 | orchestrator | 08:32:31.149 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-02-10 08:32:31.149899 | orchestrator | 08:32:31.149 STDOUT terraform:  + name = "subnet-testbed-management" 2025-02-10 08:32:31.149929 | orchestrator | 08:32:31.149 STDOUT terraform:  + network_id = (known after apply) 2025-02-10 08:32:31.149967 | orchestrator | 08:32:31.149 STDOUT terraform:  + no_gateway = false 2025-02-10 08:32:31.149998 | orchestrator | 08:32:31.149 STDOUT terraform:  + region = (known after apply) 2025-02-10 08:32:31.150061 | orchestrator | 08:32:31.149 STDOUT terraform:  + service_types = (known after apply) 2025-02-10 08:32:31.150087 | orchestrator | 08:32:31.150 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-10 08:32:31.150095 | orchestrator | 08:32:31.150 STDOUT terraform:  + allocation_pool { 2025-02-10 08:32:31.150139 | orchestrator | 08:32:31.150 STDOUT terraform:  + end = "192.168.31.250" 2025-02-10 08:32:31.150164 | orchestrator | 08:32:31.150 STDOUT terraform:  + start = "192.168.31.200" 2025-02-10 08:32:31.150171 | orchestrator | 08:32:31.150 STDOUT terraform:  } 2025-02-10 08:32:31.150178 | orchestrator | 08:32:31.150 STDOUT terraform:  } 2025-02-10 08:32:31.150207 | orchestrator | 08:32:31.150 STDOUT terraform:  # terraform_data.image will be created 2025-02-10 08:32:31.150215 | orchestrator | 08:32:31.150 STDOUT terraform:  + resource "terraform_data" "image" { 2025-02-10 08:32:31.150252 | orchestrator | 08:32:31.150 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.150260 | orchestrator | 08:32:31.150 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-02-10 08:32:31.150293 | orchestrator | 08:32:31.150 STDOUT terraform:  + output = (known after apply) 2025-02-10 08:32:31.150330 | orchestrator | 08:32:31.150 STDOUT terraform:  } 2025-02-10 08:32:31.150338 | orchestrator | 08:32:31.150 STDOUT terraform:  # terraform_data.image_node will be created 2025-02-10 08:32:31.150346 | orchestrator | 08:32:31.150 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-02-10 
08:32:31.150384 | orchestrator | 08:32:31.150 STDOUT terraform:  + id = (known after apply) 2025-02-10 08:32:31.150393 | orchestrator | 08:32:31.150 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-02-10 08:32:31.150425 | orchestrator | 08:32:31.150 STDOUT terraform:  + output = (known after apply) 2025-02-10 08:32:31.150464 | orchestrator | 08:32:31.150 STDOUT terraform:  } 2025-02-10 08:32:31.150473 | orchestrator | 08:32:31.150 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-02-10 08:32:31.150507 | orchestrator | 08:32:31.150 STDOUT terraform: Changes to Outputs: 2025-02-10 08:32:31.150517 | orchestrator | 08:32:31.150 STDOUT terraform:  + manager_address = (sensitive value) 2025-02-10 08:32:31.150525 | orchestrator | 08:32:31.150 STDOUT terraform:  + private_key = (sensitive value) 2025-02-10 08:32:31.238418 | orchestrator | 08:32:31.238 STDOUT terraform: terraform_data.image: Creating... 2025-02-10 08:32:31.376575 | orchestrator | 08:32:31.238 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=98a62c04-2918-f265-6a62-e3987e90e4cb] 2025-02-10 08:32:31.376706 | orchestrator | 08:32:31.373 STDOUT terraform: terraform_data.image_node: Creating... 2025-02-10 08:32:31.384416 | orchestrator | 08:32:31.373 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=dfe50c48-d116-5f30-2435-80a20440c213] 2025-02-10 08:32:31.384516 | orchestrator | 08:32:31.382 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-02-10 08:32:31.391757 | orchestrator | 08:32:31.391 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-02-10 08:32:31.392147 | orchestrator | 08:32:31.392 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-02-10 08:32:31.392238 | orchestrator | 08:32:31.392 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-02-10 08:32:31.392525 | orchestrator | 08:32:31.392 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-02-10 08:32:31.414121 | orchestrator | 08:32:31.407 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-02-10 08:32:31.852950 | orchestrator | 08:32:31.408 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating... 2025-02-10 08:32:31.853121 | orchestrator | 08:32:31.408 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-02-10 08:32:31.853146 | orchestrator | 08:32:31.408 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 2025-02-10 08:32:31.853162 | orchestrator | 08:32:31.409 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-02-10 08:32:31.853203 | orchestrator | 08:32:31.852 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-02-10 08:32:31.860824 | orchestrator | 08:32:31.852 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-02-10 08:32:31.860948 | orchestrator | 08:32:31.860 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 2025-02-10 08:32:31.866369 | orchestrator | 08:32:31.866 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 
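
The apply starts with the two terraform_data resources, whose "Ubuntu 24.04" input is then resolved through the openstack_images_image_v2 data sources (both lookups return the same image id, cd9ae1ce-c4eb-4380-9087-2aa040df6990). A minimal HCL sketch of that pattern, reconstructed from the plan output above and not the actual osism/testbed module code, follows; the data-source arguments (name, most_recent) and the wiring are assumptions:

    # Sketch only: the image name comes from the log, everything else is assumed.
    resource "terraform_data" "image" {
      input = "Ubuntu 24.04"
    }

    # The data source can only be read once terraform_data.image.output is known,
    # which matches the ordering visible in the log (create, then read).
    data "openstack_images_image_v2" "image" {
      name        = terraform_data.image.output
      most_recent = true
    }
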
2025-02-10 08:32:31.986245 | orchestrator | 08:32:31.985 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-02-10 08:32:31.995473 | orchestrator | 08:32:31.995 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating... 2025-02-10 08:32:37.250486 | orchestrator | 08:32:37.249 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=6d95ca8c-36b0-4a81-9b9c-8696dfcef6b0] 2025-02-10 08:32:37.261769 | orchestrator | 08:32:37.261 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-02-10 08:32:41.394742 | orchestrator | 08:32:41.394 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-02-10 08:32:41.403859 | orchestrator | 08:32:41.394 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-02-10 08:32:41.403990 | orchestrator | 08:32:41.403 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-02-10 08:32:41.404995 | orchestrator | 08:32:41.404 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed] 2025-02-10 08:32:41.406183 | orchestrator | 08:32:41.405 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-02-10 08:32:41.411707 | orchestrator | 08:32:41.411 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-02-10 08:32:41.861922 | orchestrator | 08:32:41.861 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed] 2025-02-10 08:32:41.866251 | orchestrator | 08:32:41.865 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-02-10 08:32:41.996151 | orchestrator | 08:32:41.995 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed] 2025-02-10 08:32:41.998762 | orchestrator | 08:32:41.998 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 11s [id=a5ae359e-12ae-4197-8eef-3ae34f8c1334] 2025-02-10 08:32:42.005444 | orchestrator | 08:32:42.005 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-02-10 08:32:42.006440 | orchestrator | 08:32:42.006 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=3635edd1-676b-4d23-b864-ce2187808155] 2025-02-10 08:32:42.009880 | orchestrator | 08:32:42.009 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=d0052f5e-c6c0-4052-8cbb-79a9efbad2c5] 2025-02-10 08:32:42.011957 | orchestrator | 08:32:42.011 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-02-10 08:32:42.013641 | orchestrator | 08:32:42.013 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 
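
The node_volume resources appear with indices 0 through 17, which suggests a counted openstack_blockstorage_volume_v3 resource roughly like the sketch below. The count of 18 follows from the indices in the log; the name pattern and the volume size are assumptions, since neither is printed here:

    # Sketch: 18 block-storage volumes for the testbed nodes.
    resource "openstack_blockstorage_volume_v3" "node_volume" {
      count = 18                                   # indices [0]..[17] appear in the log
      name  = "testbed-node-volume-${count.index}" # hypothetical name pattern
      size  = 20                                   # assumed; not shown in this log
    }
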
2025-02-10 08:32:42.043500 | orchestrator | 08:32:42.043 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 11s [id=b66e53a8-0538-4d41-8a28-7ec132d4688f] 2025-02-10 08:32:42.049356 | orchestrator | 08:32:42.049 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=c812b351-48e5-4920-9aaa-4a69febb969f] 2025-02-10 08:32:42.049538 | orchestrator | 08:32:42.049 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=3f2f0c75-1857-43ef-b86a-d1c385559ce2] 2025-02-10 08:32:42.058753 | orchestrator | 08:32:42.058 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 2025-02-10 08:32:42.059721 | orchestrator | 08:32:42.059 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-02-10 08:32:42.060480 | orchestrator | 08:32:42.060 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-02-10 08:32:42.104756 | orchestrator | 08:32:42.104 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 10s [id=b9150377-bf23-4053-9d8b-4b6b16705e51] 2025-02-10 08:32:42.112423 | orchestrator | 08:32:42.112 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 2025-02-10 08:32:42.124832 | orchestrator | 08:32:42.124 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=4c8bf85e-c93c-4dde-a0b9-becc690957dc] 2025-02-10 08:32:42.130344 | orchestrator | 08:32:42.130 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating... 2025-02-10 08:32:42.174118 | orchestrator | 08:32:42.173 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 10s [id=75df373f-19f7-4c01-b032-3384165fc32e] 2025-02-10 08:32:42.184035 | orchestrator | 08:32:42.183 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-02-10 08:32:47.262348 | orchestrator | 08:32:47.261 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-02-10 08:32:47.428777 | orchestrator | 08:32:47.428 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=30eee918-495f-46ac-9f20-7bf018cd9f92] 2025-02-10 08:32:47.439953 | orchestrator | 08:32:47.439 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-02-10 08:32:52.007000 | orchestrator | 08:32:52.006 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-02-10 08:32:52.013500 | orchestrator | 08:32:52.013 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed] 2025-02-10 08:32:52.014830 | orchestrator | 08:32:52.014 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed] 2025-02-10 08:32:52.060670 | orchestrator | 08:32:52.060 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-02-10 08:32:52.060780 | orchestrator | 08:32:52.060 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed] 2025-02-10 08:32:52.060977 | orchestrator | 08:32:52.060 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... 
[10s elapsed] 2025-02-10 08:32:52.114134 | orchestrator | 08:32:52.113 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed] 2025-02-10 08:32:52.131776 | orchestrator | 08:32:52.131 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed] 2025-02-10 08:32:52.184695 | orchestrator | 08:32:52.184 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-02-10 08:32:52.185613 | orchestrator | 08:32:52.185 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=2438f8bd-e1da-4f87-b9a4-97b4ac996f9c] 2025-02-10 08:32:53.269008 | orchestrator | 08:32:53.268 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 11s [id=7291aabe-5e3f-438e-8469-36f2cb5c6009] 2025-02-10 08:32:53.277039 | orchestrator | 08:32:53.276 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 11s [id=847baef5-49eb-4270-9699-f3453f51c947] 2025-02-10 08:32:53.279355 | orchestrator | 08:32:53.279 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=5f0d01b9-0e02-4dee-9565-cff6803c305a] 2025-02-10 08:32:53.284248 | orchestrator | 08:32:53.284 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-02-10 08:32:53.287419 | orchestrator | 08:32:53.287 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 11s [id=2f4b37ab-ea48-4e89-a573-74f28832e598] 2025-02-10 08:32:53.287717 | orchestrator | 08:32:53.287 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 11s [id=f26c39ad-11ff-4bfe-ad92-01d3e6216f06] 2025-02-10 08:32:53.289665 | orchestrator | 08:32:53.289 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 11s [id=8c6e9329-7e35-46a5-ba0c-0fddbc56ea2a] 2025-02-10 08:32:53.289967 | orchestrator | 08:32:53.289 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=42cc0269-ef61-400f-abce-84bd5d105328] 2025-02-10 08:32:53.290930 | orchestrator | 08:32:53.290 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=1ab0a82a-cefc-4a53-8b35-3a0c471d1d44] 2025-02-10 08:32:53.296357 | orchestrator | 08:32:53.296 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-02-10 08:32:53.299085 | orchestrator | 08:32:53.298 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-02-10 08:32:53.299228 | orchestrator | 08:32:53.299 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-02-10 08:32:53.301950 | orchestrator | 08:32:53.301 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-02-10 08:32:53.307010 | orchestrator | 08:32:53.306 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=4c3a91efdce5e48d52c9e5d9d8bf69571ae37534] 2025-02-10 08:32:53.307213 | orchestrator | 08:32:53.307 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-02-10 08:32:53.309180 | orchestrator | 08:32:53.309 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-02-10 08:32:53.309950 | orchestrator | 08:32:53.309 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 
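
The management subnet being created here was fully described in the plan output: CIDR 192.168.16.0/20, 8.8.8.8 and 9.9.9.9 as resolvers, DHCP enabled, and an allocation pool of 192.168.31.200-250. As a sketch (the network_id reference is an assumption about how the module wires its resources together):

    resource "openstack_networking_subnet_v2" "subnet_management" {
      name            = "subnet-testbed-management"
      network_id      = openstack_networking_network_v2.net_management.id  # assumed reference
      cidr            = "192.168.16.0/20"
      ip_version      = 4
      enable_dhcp     = true
      dns_nameservers = ["8.8.8.8", "9.9.9.9"]

      # Keeps the DHCP range away from the statically assigned 192.168.16.x
      # addresses used by the manager and node ports.
      allocation_pool {
        start = "192.168.31.200"
        end   = "192.168.31.250"
      }
    }
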
2025-02-10 08:32:53.319376 | orchestrator | 08:32:53.319 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=0c9a6a2d9657c61fe8675ad7f5a40562bffed0ea] 2025-02-10 08:32:57.441094 | orchestrator | 08:32:57.440 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-02-10 08:32:57.771925 | orchestrator | 08:32:57.771 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=3e6955b6-ceeb-4871-99fa-6f4d00721e84] 2025-02-10 08:32:59.019031 | orchestrator | 08:32:59.018 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=020f0323-f0e0-4ac4-a624-b7ff55a0f582] 2025-02-10 08:32:59.041760 | orchestrator | 08:32:59.041 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-02-10 08:33:03.288944 | orchestrator | 08:33:03.288 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-02-10 08:33:03.299169 | orchestrator | 08:33:03.298 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-02-10 08:33:03.301281 | orchestrator | 08:33:03.301 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-02-10 08:33:03.301479 | orchestrator | 08:33:03.301 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-02-10 08:33:03.311068 | orchestrator | 08:33:03.310 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-02-10 08:33:03.647808 | orchestrator | 08:33:03.647 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=d47abd3b-400c-4af9-8fd5-b0027775d899] 2025-02-10 08:33:03.673001 | orchestrator | 08:33:03.672 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=f264afce-82d2-497c-9a77-eb4255e0ba66] 2025-02-10 08:33:03.700946 | orchestrator | 08:33:03.700 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=ec7b193c-21f8-4d72-a19d-1fec7ab5cb66] 2025-02-10 08:33:03.705921 | orchestrator | 08:33:03.705 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=232a29e8-485f-4033-b159-19c4e9acd946] 2025-02-10 08:33:03.727752 | orchestrator | 08:33:03.727 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=2afb4105-92b8-4f06-8361-8ae3b6c04642] 2025-02-10 08:33:05.760888 | orchestrator | 08:33:05.760 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=32148d71-c2d8-4d32-9a1e-e17af68d4b50] 2025-02-10 08:33:05.772951 | orchestrator | 08:33:05.772 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-02-10 08:33:05.773096 | orchestrator | 08:33:05.772 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-02-10 08:33:05.906281 | orchestrator | 08:33:05.772 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 
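
The router and its interface come next; the plan showed the router named "testbed", pinned to the "nova" availability zone hint and attached to the external network e6be7364-bfd8-4de7-8120-8f41c69a139a. A sketch of the pair, with the subnet reference assumed:

    resource "openstack_networking_router_v2" "router" {
      name                    = "testbed"
      external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_router_interface_v2" "router_interface" {
      router_id = openstack_networking_router_v2.router.id
      subnet_id = openstack_networking_subnet_v2.subnet_management.id  # assumed reference
    }
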
2025-02-10 08:33:05.906461 | orchestrator | 08:33:05.904 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=67746afe-8765-459c-bb61-c5dbf7e70558] 2025-02-10 08:33:05.928402 | orchestrator | 08:33:05.928 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-02-10 08:33:05.928500 | orchestrator | 08:33:05.928 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-02-10 08:33:05.928527 | orchestrator | 08:33:05.928 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-02-10 08:33:05.928691 | orchestrator | 08:33:05.928 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-02-10 08:33:05.928832 | orchestrator | 08:33:05.928 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-02-10 08:33:05.928953 | orchestrator | 08:33:05.928 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-02-10 08:33:05.935082 | orchestrator | 08:33:05.934 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=4f0ccb23-59fd-4e2c-83b5-992ad6c49d87] 2025-02-10 08:33:05.942263 | orchestrator | 08:33:05.942 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-02-10 08:33:05.943207 | orchestrator | 08:33:05.943 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-02-10 08:33:05.944965 | orchestrator | 08:33:05.944 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-02-10 08:33:06.149823 | orchestrator | 08:33:06.149 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=a33e90d0-0af4-402b-9be3-333a240cb99b] 2025-02-10 08:33:06.163885 | orchestrator | 08:33:06.163 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-02-10 08:33:06.166962 | orchestrator | 08:33:06.166 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=fd302105-25df-42bc-b4e0-cf997fc5282c] 2025-02-10 08:33:06.183413 | orchestrator | 08:33:06.182 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-02-10 08:33:06.409393 | orchestrator | 08:33:06.408 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=71487095-95aa-4238-90a5-f77a3d935b45] 2025-02-10 08:33:06.434146 | orchestrator | 08:33:06.433 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-02-10 08:33:06.546126 | orchestrator | 08:33:06.545 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=c5c4f844-a5e5-457d-a5f5-38e7fb4ad60d] 2025-02-10 08:33:06.559972 | orchestrator | 08:33:06.559 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-02-10 08:33:06.698606 | orchestrator | 08:33:06.698 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=e55aa025-8ffc-4b12-8ffe-3ebff33b4790] 2025-02-10 08:33:06.711315 | orchestrator | 08:33:06.711 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 
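
The two security groups (testbed-management, testbed-node) and their rules are created in parallel here. Per the plan, the management group admits SSH (22/tcp) and WireGuard (51820/udp) from anywhere, all tcp/udp from 192.168.16.0/20, and ICMP, while a separate rule opens IP protocol 112 for VRRP. A sketch of the group plus its first two rules (the group the VRRP rule attaches to is not visible in the log, so it is omitted):

    resource "openstack_networking_secgroup_v2" "security_group_management" {
      name        = "testbed-management"
      description = "management security group"
    }

    resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      description       = "ssh"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "tcp"
      port_range_min    = 22
      port_range_max    = 22
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_management.id
    }

    resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      description       = "wireguard"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "udp"
      port_range_min    = 51820
      port_range_max    = 51820
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_management.id
    }
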
2025-02-10 08:33:06.840546 | orchestrator | 08:33:06.840 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=38d0c14a-d9a4-45d4-a184-f473526e20c7] 2025-02-10 08:33:06.847358 | orchestrator | 08:33:06.847 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-02-10 08:33:06.975915 | orchestrator | 08:33:06.975 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=4c5cf922-e9b7-4bb2-9ff5-74dc974ece4f] 2025-02-10 08:33:06.982540 | orchestrator | 08:33:06.982 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-02-10 08:33:07.057793 | orchestrator | 08:33:07.057 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=2cc5760d-e886-47ff-be7f-0f4f89ee0c3c] 2025-02-10 08:33:07.179678 | orchestrator | 08:33:07.179 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=25408689-4152-4e00-a0e9-8f323bc64e8e] 2025-02-10 08:33:11.638445 | orchestrator | 08:33:11.637 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=ce38631a-bbdc-4d5c-bd19-0fe3676c8ddc] 2025-02-10 08:33:12.001415 | orchestrator | 08:33:12.000 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=e843c608-a0ea-47b5-a0f5-4a3aeda9e73f] 2025-02-10 08:33:12.110295 | orchestrator | 08:33:12.109 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=0ef370d4-f39f-4233-94fa-2f42b2770a97] 2025-02-10 08:33:12.273256 | orchestrator | 08:33:12.272 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=a48acd54-5d6a-4f33-8661-1a19ba302342] 2025-02-10 08:33:12.568812 | orchestrator | 08:33:12.568 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 7s [id=d05fb940-0e4c-46d3-bd9c-29d8b1399bfb] 2025-02-10 08:33:12.827371 | orchestrator | 08:33:12.826 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 7s [id=a3bd5dfb-0b49-4703-b4bd-ba20ebbd2f2f] 2025-02-10 08:33:12.902123 | orchestrator | 08:33:12.901 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=c39d7066-cd00-417d-944b-b656e8cf9331] 2025-02-10 08:33:13.391102 | orchestrator | 08:33:13.390 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=e8511c76-53d3-4dbc-b130-80d3d9eafbe7] 2025-02-10 08:33:13.418039 | orchestrator | 08:33:13.417 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-02-10 08:33:13.430110 | orchestrator | 08:33:13.429 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-02-10 08:33:13.441643 | orchestrator | 08:33:13.441 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-02-10 08:33:13.443944 | orchestrator | 08:33:13.443 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-02-10 08:33:13.444274 | orchestrator | 08:33:13.444 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-02-10 08:33:13.462302 | orchestrator | 08:33:13.461 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 
2025-02-10 08:33:13.462881 | orchestrator | 08:33:13.462 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-02-10 08:33:19.916648 | orchestrator | 08:33:19.916 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=dee6c997-418d-4227-a15c-1f1456ac5c62] 2025-02-10 08:33:19.927860 | orchestrator | 08:33:19.927 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-02-10 08:33:19.933040 | orchestrator | 08:33:19.932 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-02-10 08:33:19.933966 | orchestrator | 08:33:19.933 STDOUT terraform: local_file.inventory: Creating... 2025-02-10 08:33:19.939878 | orchestrator | 08:33:19.939 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=faf67af2465dd7e1c14275b3d80194e251778fd6] 2025-02-10 08:33:19.940557 | orchestrator | 08:33:19.940 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=a2979892ca02f0e2be0a5ee90fb6db648c1af664] 2025-02-10 08:33:20.447157 | orchestrator | 08:33:20.446 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=dee6c997-418d-4227-a15c-1f1456ac5c62] 2025-02-10 08:33:23.431439 | orchestrator | 08:33:23.431 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-02-10 08:33:23.444796 | orchestrator | 08:33:23.444 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-02-10 08:33:23.447019 | orchestrator | 08:33:23.446 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-02-10 08:33:23.463361 | orchestrator | 08:33:23.446 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-02-10 08:33:23.463498 | orchestrator | 08:33:23.463 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-02-10 08:33:23.464365 | orchestrator | 08:33:23.464 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-02-10 08:33:33.432308 | orchestrator | 08:33:33.431 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-02-10 08:33:33.445813 | orchestrator | 08:33:33.445 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-02-10 08:33:33.447964 | orchestrator | 08:33:33.447 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-02-10 08:33:33.448012 | orchestrator | 08:33:33.447 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-02-10 08:33:33.464329 | orchestrator | 08:33:33.464 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-02-10 08:33:33.465628 | orchestrator | 08:33:33.465 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... 
[20s elapsed] 2025-02-10 08:33:33.861811 | orchestrator | 08:33:33.861 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=f227cb0b-17b5-4625-9f95-208cc3e33027] 2025-02-10 08:33:33.984188 | orchestrator | 08:33:33.982 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=e3c8536f-9243-498b-9cd5-0c601f9d8764] 2025-02-10 08:33:34.275104 | orchestrator | 08:33:34.274 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=3f2b58fa-f995-42ec-bb8f-a4524a505872] 2025-02-10 08:33:43.447146 | orchestrator | 08:33:43.446 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-02-10 08:33:43.449181 | orchestrator | 08:33:43.448 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-02-10 08:33:43.466793 | orchestrator | 08:33:43.466 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-02-10 08:33:44.072786 | orchestrator | 08:33:44.072 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=021b9289-3edb-47d2-8e7c-2d961d99cc79] 2025-02-10 08:33:44.391511 | orchestrator | 08:33:44.391 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=714bce67-d8e5-4445-9de3-fba419f13f0e] 2025-02-10 08:33:45.043299 | orchestrator | 08:33:45.042 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 32s [id=915950ee-cbc3-4e18-8903-c1a7b194cd5f] 2025-02-10 08:33:45.067280 | orchestrator | 08:33:45.067 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-02-10 08:33:45.076437 | orchestrator | 08:33:45.076 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=2300055860566196604] 2025-02-10 08:33:45.080118 | orchestrator | 08:33:45.079 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-02-10 08:33:45.080962 | orchestrator | 08:33:45.080 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating... 2025-02-10 08:33:45.084392 | orchestrator | 08:33:45.084 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-02-10 08:33:45.092783 | orchestrator | 08:33:45.091 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-02-10 08:33:45.109513 | orchestrator | 08:33:45.106 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating... 2025-02-10 08:33:45.111355 | orchestrator | 08:33:45.111 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-02-10 08:33:45.112988 | orchestrator | 08:33:45.112 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-02-10 08:33:45.114388 | orchestrator | 08:33:45.114 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating... 2025-02-10 08:33:45.115023 | orchestrator | 08:33:45.114 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating... 2025-02-10 08:33:45.115063 | orchestrator | 08:33:45.114 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 
2025-02-10 08:33:50.467873 | orchestrator | 08:33:50.467 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 5s [id=021b9289-3edb-47d2-8e7c-2d961d99cc79/75df373f-19f7-4c01-b032-3384165fc32e] 2025-02-10 08:33:50.469114 | orchestrator | 08:33:50.468 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=3f2b58fa-f995-42ec-bb8f-a4524a505872/d0052f5e-c6c0-4052-8cbb-79a9efbad2c5] 2025-02-10 08:33:50.489638 | orchestrator | 08:33:50.489 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating... 2025-02-10 08:33:50.491167 | orchestrator | 08:33:50.490 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-02-10 08:33:50.505802 | orchestrator | 08:33:50.505 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=e3c8536f-9243-498b-9cd5-0c601f9d8764/c812b351-48e5-4920-9aaa-4a69febb969f] 2025-02-10 08:33:50.508346 | orchestrator | 08:33:50.508 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=f227cb0b-17b5-4625-9f95-208cc3e33027/30eee918-495f-46ac-9f20-7bf018cd9f92] 2025-02-10 08:33:50.520127 | orchestrator | 08:33:50.519 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating... 2025-02-10 08:33:50.520301 | orchestrator | 08:33:50.520 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating... 2025-02-10 08:33:50.521879 | orchestrator | 08:33:50.521 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=915950ee-cbc3-4e18-8903-c1a7b194cd5f/2438f8bd-e1da-4f87-b9a4-97b4ac996f9c] 2025-02-10 08:33:50.537125 | orchestrator | 08:33:50.536 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 6s [id=714bce67-d8e5-4445-9de3-fba419f13f0e/b9150377-bf23-4053-9d8b-4b6b16705e51] 2025-02-10 08:33:50.543913 | orchestrator | 08:33:50.543 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=e3c8536f-9243-498b-9cd5-0c601f9d8764/3635edd1-676b-4d23-b864-ce2187808155] 2025-02-10 08:33:50.544531 | orchestrator | 08:33:50.544 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-02-10 08:33:50.557754 | orchestrator | 08:33:50.557 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 6s [id=915950ee-cbc3-4e18-8903-c1a7b194cd5f/a5ae359e-12ae-4197-8eef-3ae34f8c1334] 2025-02-10 08:33:50.559240 | orchestrator | 08:33:50.559 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating... 2025-02-10 08:33:50.560326 | orchestrator | 08:33:50.560 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-02-10 08:33:50.567458 | orchestrator | 08:33:50.567 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating... 2025-02-10 08:33:50.578002 | orchestrator | 08:33:50.577 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=714bce67-d8e5-4445-9de3-fba419f13f0e/5f0d01b9-0e02-4dee-9565-cff6803c305a] 2025-02-10 08:33:50.600739 | orchestrator | 08:33:50.600 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
2025-02-10 08:33:50.656547 | orchestrator | 08:33:50.656 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 6s [id=714bce67-d8e5-4445-9de3-fba419f13f0e/2f4b37ab-ea48-4e89-a573-74f28832e598] 2025-02-10 08:33:55.840104 | orchestrator | 08:33:55.839 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 6s [id=915950ee-cbc3-4e18-8903-c1a7b194cd5f/b66e53a8-0538-4d41-8a28-7ec132d4688f] 2025-02-10 08:33:55.876787 | orchestrator | 08:33:55.876 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=021b9289-3edb-47d2-8e7c-2d961d99cc79/4c8bf85e-c93c-4dde-a0b9-becc690957dc] 2025-02-10 08:33:55.912511 | orchestrator | 08:33:55.912 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=021b9289-3edb-47d2-8e7c-2d961d99cc79/3f2f0c75-1857-43ef-b86a-d1c385559ce2] 2025-02-10 08:33:55.914180 | orchestrator | 08:33:55.913 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 5s [id=e3c8536f-9243-498b-9cd5-0c601f9d8764/7291aabe-5e3f-438e-8469-36f2cb5c6009] 2025-02-10 08:33:55.938382 | orchestrator | 08:33:55.937 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 5s [id=f227cb0b-17b5-4625-9f95-208cc3e33027/8c6e9329-7e35-46a5-ba0c-0fddbc56ea2a] 2025-02-10 08:33:55.943655 | orchestrator | 08:33:55.943 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 5s [id=3f2b58fa-f995-42ec-bb8f-a4524a505872/847baef5-49eb-4270-9699-f3453f51c947] 2025-02-10 08:33:55.952341 | orchestrator | 08:33:55.952 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 5s [id=f227cb0b-17b5-4625-9f95-208cc3e33027/f26c39ad-11ff-4bfe-ad92-01d3e6216f06] 2025-02-10 08:33:55.968265 | orchestrator | 08:33:55.967 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=3f2b58fa-f995-42ec-bb8f-a4524a505872/1ab0a82a-cefc-4a53-8b35-3a0c471d1d44] 2025-02-10 08:34:00.602871 | orchestrator | 08:34:00.602 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-02-10 08:34:10.603869 | orchestrator | 08:34:10.603 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-02-10 08:34:11.232238 | orchestrator | 08:34:11.231 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=f4e67e9d-4146-4b46-a6ba-3422486945ab] 2025-02-10 08:34:11.249656 | orchestrator | 08:34:11.249 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed. 
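The apply above finishes with 82 resources added, none changed and none destroyed. As a minimal sketch of cross-checking that result with the standard Terraform CLI (assuming shell access to the same Terraform working directory the job used; the cd path below is only a placeholder and is not taken from this log):

    cd terraform/                      # placeholder path for the testbed Terraform configuration
    terraform state list | wc -l       # enumerate the resources recorded in the state
    terraform plan -detailed-exitcode  # exit code 0: state matches the configuration, 2: changes pending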
2025-02-10 08:34:11.249777 | orchestrator | 08:34:11.249 STDOUT terraform: Outputs: 2025-02-10 08:34:11.249798 | orchestrator | 08:34:11.249 STDOUT terraform: manager_address = 2025-02-10 08:34:11.249821 | orchestrator | 08:34:11.249 STDOUT terraform: private_key = 2025-02-10 08:34:11.504336 | orchestrator | changed 2025-02-10 08:34:11.541893 | 2025-02-10 08:34:11.542017 | TASK [Create infrastructure (stable)] 2025-02-10 08:34:11.641569 | orchestrator | skipping: Conditional result was False 2025-02-10 08:34:11.662083 | 2025-02-10 08:34:11.662258 | TASK [Fetch manager address] 2025-02-10 08:34:22.598954 | orchestrator | ok 2025-02-10 08:34:22.618193 | 2025-02-10 08:34:22.618352 | TASK [Set manager_host address] 2025-02-10 08:34:22.723423 | orchestrator | ok 2025-02-10 08:34:22.734602 | 2025-02-10 08:34:22.734754 | LOOP [Update ansible collections] 2025-02-10 08:34:26.928835 | orchestrator | changed 2025-02-10 08:34:30.830532 | orchestrator | changed 2025-02-10 08:34:30.848721 | 2025-02-10 08:34:30.848890 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-02-10 08:34:41.429894 | orchestrator | ok 2025-02-10 08:34:41.446684 | 2025-02-10 08:34:41.446959 | TASK [Wait a little longer for the manager so that everything is ready] 2025-02-10 08:35:41.495447 | orchestrator | ok 2025-02-10 08:35:41.506440 | 2025-02-10 08:35:41.506542 | TASK [Fetch manager ssh hostkey] 2025-02-10 08:35:42.597615 | orchestrator | Output suppressed because no_log was given 2025-02-10 08:35:42.609090 | 2025-02-10 08:35:42.609238 | TASK [Get ssh keypair from terraform environment] 2025-02-10 08:35:43.159211 | orchestrator | changed 2025-02-10 08:35:43.178892 | 2025-02-10 08:35:43.179029 | TASK [Point out that the following task takes some time and does not give any output] 2025-02-10 08:35:43.228039 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
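The tasks above read the manager address and the generated SSH keypair back out of the Terraform run; the two outputs shown (manager_address, private_key) are printed without values, and local_file.MANAGER_ADDRESS and local_file.inventory were also written during the apply. A rough sketch of fetching the same data by hand with the plain Terraform CLI, assuming the same working directory (the exact mechanism behind the 'Fetch manager address' and 'Get ssh keypair from terraform environment' tasks is not visible in this log, and the key filename below is only a placeholder):

    terraform output -raw manager_address
    terraform output -raw private_key > id_rsa.testbed
    chmod 600 id_rsa.testbed
    # record the manager's host key, comparable to the 'Fetch manager ssh hostkey' task
    ssh-keyscan -H "$(terraform output -raw manager_address)" >> ~/.ssh/known_hosts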
2025-02-10 08:35:43.240510 | 2025-02-10 08:35:43.240644 | TASK [Run manager part 0] 2025-02-10 08:35:44.224023 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-02-10 08:35:44.320918 | orchestrator | 2025-02-10 08:35:46.444668 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-02-10 08:35:46.444767 | orchestrator | 2025-02-10 08:35:46.444791 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-02-10 08:35:46.444818 | orchestrator | ok: [testbed-manager] 2025-02-10 08:35:48.515860 | orchestrator | 2025-02-10 08:35:48.515926 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-02-10 08:35:48.515937 | orchestrator | 2025-02-10 08:35:48.515944 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 08:35:48.515957 | orchestrator | ok: [testbed-manager] 2025-02-10 08:35:49.226286 | orchestrator | 2025-02-10 08:35:49.226365 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-02-10 08:35:49.226383 | orchestrator | ok: [testbed-manager] 2025-02-10 08:35:49.269883 | orchestrator | 2025-02-10 08:35:49.269947 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-02-10 08:35:49.269967 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:35:49.305507 | orchestrator | 2025-02-10 08:35:49.305578 | orchestrator | TASK [Update package cache] **************************************************** 2025-02-10 08:35:49.305596 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:35:49.352020 | orchestrator | 2025-02-10 08:35:49.352079 | orchestrator | TASK [Install required packages] *********************************************** 2025-02-10 08:35:49.352099 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:35:49.389082 | orchestrator | 2025-02-10 08:35:49.389138 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-02-10 08:35:49.389155 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:35:49.420266 | orchestrator | 2025-02-10 08:35:49.420312 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-02-10 08:35:49.420327 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:35:49.450422 | orchestrator | 2025-02-10 08:35:49.450472 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-02-10 08:35:49.450487 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:35:49.481528 | orchestrator | 2025-02-10 08:35:49.481609 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-02-10 08:35:49.481625 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:35:50.374955 | orchestrator | 2025-02-10 08:35:50.375034 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-02-10 08:35:50.375053 | orchestrator | changed: [testbed-manager] 2025-02-10 08:38:16.183282 | orchestrator | 2025-02-10 08:38:16.183388 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-02-10 08:38:16.183438 | orchestrator | changed: [testbed-manager] 2025-02-10 08:39:20.980388 | orchestrator | 2025-02-10 08:39:20.980523 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-02-10 08:39:20.980601 | orchestrator | changed: [testbed-manager] 2025-02-10 08:39:43.563653 | orchestrator | 2025-02-10 08:39:43.563703 | orchestrator | TASK [Install required packages] *********************************************** 2025-02-10 08:39:43.563823 | orchestrator | changed: [testbed-manager] 2025-02-10 08:39:52.150661 | orchestrator | 2025-02-10 08:39:52.150792 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-02-10 08:39:52.150832 | orchestrator | changed: [testbed-manager] 2025-02-10 08:39:52.200272 | orchestrator | 2025-02-10 08:39:52.200462 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-02-10 08:39:52.200540 | orchestrator | ok: [testbed-manager] 2025-02-10 08:39:53.079993 | orchestrator | 2025-02-10 08:39:53.080215 | orchestrator | TASK [Get current user] ******************************************************** 2025-02-10 08:39:53.080244 | orchestrator | ok: [testbed-manager] 2025-02-10 08:39:53.834352 | orchestrator | 2025-02-10 08:39:53.834451 | orchestrator | TASK [Create venv directory] *************************************************** 2025-02-10 08:39:53.834487 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:00.358528 | orchestrator | 2025-02-10 08:40:00.358644 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-02-10 08:40:00.358678 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:06.533084 | orchestrator | 2025-02-10 08:40:06.533245 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-02-10 08:40:06.533309 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:09.218241 | orchestrator | 2025-02-10 08:40:09.218367 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-02-10 08:40:09.218420 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:11.020877 | orchestrator | 2025-02-10 08:40:11.021020 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-02-10 08:40:11.021058 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:12.204729 | orchestrator | 2025-02-10 08:40:12.204845 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-02-10 08:40:12.204880 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-02-10 08:40:12.251227 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-02-10 08:40:12.251340 | orchestrator | 2025-02-10 08:40:12.251361 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-02-10 08:40:12.251392 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-02-10 08:40:15.485662 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-02-10 08:40:15.485751 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-02-10 08:40:15.485770 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-02-10 08:40:15.485817 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-02-10 08:40:16.028182 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-02-10 08:40:16.028250 | orchestrator | 2025-02-10 08:40:16.028265 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-02-10 08:40:16.028286 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:38.081376 | orchestrator | 2025-02-10 08:40:38.081508 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-02-10 08:40:38.081589 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-02-10 08:40:40.412096 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-02-10 08:40:40.412214 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-02-10 08:40:40.412234 | orchestrator | 2025-02-10 08:40:40.412250 | orchestrator | TASK [Install local collections] *********************************************** 2025-02-10 08:40:40.412280 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-02-10 08:40:41.837338 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-02-10 08:40:41.837457 | orchestrator | 2025-02-10 08:40:41.837479 | orchestrator | PLAY [Create operator user] **************************************************** 2025-02-10 08:40:41.837495 | orchestrator | 2025-02-10 08:40:41.837511 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 08:40:41.837573 | orchestrator | ok: [testbed-manager] 2025-02-10 08:40:41.885985 | orchestrator | 2025-02-10 08:40:41.886126 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-02-10 08:40:41.886183 | orchestrator | ok: [testbed-manager] 2025-02-10 08:40:41.952534 | orchestrator | 2025-02-10 08:40:41.952713 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-02-10 08:40:41.952765 | orchestrator | ok: [testbed-manager] 2025-02-10 08:40:42.807694 | orchestrator | 2025-02-10 08:40:42.807808 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-02-10 08:40:42.807848 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:43.542085 | orchestrator | 2025-02-10 08:40:43.542203 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-02-10 08:40:43.542240 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:44.958346 | orchestrator | 2025-02-10 08:40:44.958453 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-02-10 08:40:44.958485 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-02-10 08:40:46.385750 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-02-10 08:40:46.385811 | orchestrator | 2025-02-10 08:40:46.385822 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-02-10 08:40:46.385841 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:48.198438 | orchestrator | 2025-02-10 08:40:48.198513 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-02-10 08:40:48.198572 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-02-10 
08:40:48.793181 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-02-10 08:40:48.793299 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-02-10 08:40:48.793322 | orchestrator | 2025-02-10 08:40:48.793338 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-02-10 08:40:48.793379 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:48.867232 | orchestrator | 2025-02-10 08:40:48.867341 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-02-10 08:40:48.867369 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:40:49.809160 | orchestrator | 2025-02-10 08:40:49.809221 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-02-10 08:40:49.809242 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:40:49.850439 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:49.850491 | orchestrator | 2025-02-10 08:40:49.850501 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-02-10 08:40:49.850518 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:40:49.885560 | orchestrator | 2025-02-10 08:40:49.885617 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-02-10 08:40:49.885634 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:40:49.930914 | orchestrator | 2025-02-10 08:40:49.930972 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-02-10 08:40:49.930992 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:40:49.989062 | orchestrator | 2025-02-10 08:40:49.989115 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-02-10 08:40:49.989133 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:40:50.694231 | orchestrator | 2025-02-10 08:40:50.694281 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-02-10 08:40:50.694298 | orchestrator | ok: [testbed-manager] 2025-02-10 08:40:52.263588 | orchestrator | 2025-02-10 08:40:52.263642 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-02-10 08:40:52.263649 | orchestrator | 2025-02-10 08:40:52.263655 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 08:40:52.263668 | orchestrator | ok: [testbed-manager] 2025-02-10 08:40:53.269068 | orchestrator | 2025-02-10 08:40:53.269182 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-02-10 08:40:53.269218 | orchestrator | changed: [testbed-manager] 2025-02-10 08:40:53.386312 | orchestrator | 2025-02-10 08:40:53.386568 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:40:53.386598 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-02-10 08:40:53.386614 | orchestrator | 2025-02-10 08:40:53.520137 | orchestrator | changed 2025-02-10 08:40:53.539975 | 2025-02-10 08:40:53.540099 | TASK [Point out that the log in on the manager is now possible] 2025-02-10 08:40:53.591345 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
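The deprecation warning emitted during 'Sync sources in /opt/src' above already names the switch that silences it. A minimal sketch of setting it, assuming one wants to mute the warning for this playbook run (whether the testbed configuration actually does this is not shown here):

    # append to the ansible.cfg that governs the playbook run
    cat >> ansible.cfg <<'EOF'
    [defaults]
    deprecation_warnings = False
    EOF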
2025-02-10 08:40:53.603335 | 2025-02-10 08:40:53.603511 | TASK [Point out that the following task takes some time and does not give any output] 2025-02-10 08:40:53.656277 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-02-10 08:40:53.670117 | 2025-02-10 08:40:53.670249 | TASK [Run manager part 1 + 2] 2025-02-10 08:40:54.556486 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-02-10 08:40:54.677349 | orchestrator | 2025-02-10 08:40:57.280106 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-02-10 08:40:57.280168 | orchestrator | 2025-02-10 08:40:57.280186 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 08:40:57.280203 | orchestrator | ok: [testbed-manager] 2025-02-10 08:40:57.327669 | orchestrator | 2025-02-10 08:40:57.327761 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-02-10 08:40:57.327796 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:40:57.371058 | orchestrator | 2025-02-10 08:40:57.371118 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-02-10 08:40:57.371138 | orchestrator | ok: [testbed-manager] 2025-02-10 08:40:57.410285 | orchestrator | 2025-02-10 08:40:57.410348 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-02-10 08:40:57.410365 | orchestrator | ok: [testbed-manager] 2025-02-10 08:40:57.494442 | orchestrator | 2025-02-10 08:40:57.494530 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-02-10 08:40:57.494587 | orchestrator | ok: [testbed-manager] 2025-02-10 08:40:57.566239 | orchestrator | 2025-02-10 08:40:57.566327 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-02-10 08:40:57.566354 | orchestrator | ok: [testbed-manager] 2025-02-10 08:40:57.621709 | orchestrator | 2025-02-10 08:40:57.621761 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-02-10 08:40:57.621778 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-02-10 08:40:58.424790 | orchestrator | 2025-02-10 08:40:58.424898 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-02-10 08:40:58.424933 | orchestrator | ok: [testbed-manager] 2025-02-10 08:40:58.476415 | orchestrator | 2025-02-10 08:40:58.476524 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-02-10 08:40:58.476593 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:40:59.920601 | orchestrator | 2025-02-10 08:40:59.920672 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-02-10 08:40:59.920697 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:00.567655 | orchestrator | 2025-02-10 08:41:00.567784 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-02-10 08:41:00.567824 | orchestrator | ok: [testbed-manager] 2025-02-10 08:41:01.858159 | orchestrator | 2025-02-10 08:41:01.858253 | orchestrator | TASK
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-02-10 08:41:01.858286 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:14.730182 | orchestrator | 2025-02-10 08:41:14.730286 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-02-10 08:41:14.730315 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:15.408257 | orchestrator | 2025-02-10 08:41:15.408369 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-02-10 08:41:15.408405 | orchestrator | ok: [testbed-manager] 2025-02-10 08:41:15.467060 | orchestrator | 2025-02-10 08:41:15.467164 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-02-10 08:41:15.467198 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:41:16.458501 | orchestrator | 2025-02-10 08:41:16.458618 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-02-10 08:41:16.458649 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:17.386737 | orchestrator | 2025-02-10 08:41:17.386874 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-02-10 08:41:17.386913 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:17.961983 | orchestrator | 2025-02-10 08:41:17.962122 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-02-10 08:41:17.962156 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:17.999314 | orchestrator | 2025-02-10 08:41:17.999419 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-02-10 08:41:17.999450 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-02-10 08:41:20.450036 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-02-10 08:41:20.450123 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-02-10 08:41:20.450132 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-02-10 08:41:20.450150 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:29.735622 | orchestrator | 2025-02-10 08:41:29.735715 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-02-10 08:41:29.735739 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-02-10 08:41:31.411989 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-02-10 08:41:31.412119 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-02-10 08:41:31.412138 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-02-10 08:41:31.412154 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-02-10 08:41:31.412169 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-02-10 08:41:31.412184 | orchestrator | 2025-02-10 08:41:31.412199 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-02-10 08:41:31.412251 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:31.460468 | orchestrator | 2025-02-10 08:41:31.460572 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-02-10 08:41:31.460597 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:41:34.598145 | orchestrator | 2025-02-10 08:41:34.598214 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-02-10 08:41:34.598234 | orchestrator | changed: [testbed-manager] 2025-02-10 08:41:34.641232 | orchestrator | 2025-02-10 08:41:34.641326 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-02-10 08:41:34.641358 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:43:13.745761 | orchestrator | 2025-02-10 08:43:13.745861 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-02-10 08:43:13.745896 | orchestrator | changed: [testbed-manager] 2025-02-10 08:43:14.899800 | orchestrator | 2025-02-10 08:43:14.899854 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-02-10 08:43:14.899871 | orchestrator | ok: [testbed-manager] 2025-02-10 08:43:15.044217 | orchestrator | 2025-02-10 08:43:15.044390 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:43:15.044680 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-02-10 08:43:15.044792 | orchestrator | 2025-02-10 08:43:15.332096 | orchestrator | changed 2025-02-10 08:43:15.351035 | 2025-02-10 08:43:15.351176 | TASK [Reboot manager] 2025-02-10 08:43:16.897702 | orchestrator | changed 2025-02-10 08:43:16.918671 | 2025-02-10 08:43:16.918828 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-02-10 08:43:31.064442 | orchestrator | ok 2025-02-10 08:43:31.077525 | 2025-02-10 08:43:31.077651 | TASK [Wait a little longer for the manager so that everything is ready] 2025-02-10 08:44:31.130911 | orchestrator | ok 2025-02-10 08:44:31.142637 | 2025-02-10 08:44:31.142760 | TASK [Deploy manager + bootstrap nodes] 2025-02-10 08:44:33.636935 | orchestrator | 2025-02-10 08:44:33.640114 | orchestrator | # DEPLOY MANAGER 2025-02-10 08:44:33.640218 | orchestrator | 2025-02-10 08:44:33.640269 | orchestrator | + set -e 2025-02-10 08:44:33.640321 | orchestrator | + echo 2025-02-10 08:44:33.640341 | orchestrator | + echo '# DEPLOY MANAGER' 2025-02-10 08:44:33.640358 | 
orchestrator | + echo 2025-02-10 08:44:33.640383 | orchestrator | + cat /opt/manager-vars.sh 2025-02-10 08:44:33.640422 | orchestrator | export NUMBER_OF_NODES=6 2025-02-10 08:44:33.641483 | orchestrator | 2025-02-10 08:44:33.641512 | orchestrator | export CEPH_VERSION=quincy 2025-02-10 08:44:33.641556 | orchestrator | export CONFIGURATION_VERSION=main 2025-02-10 08:44:33.641596 | orchestrator | export MANAGER_VERSION=latest 2025-02-10 08:44:33.641611 | orchestrator | export OPENSTACK_VERSION=2024.1 2025-02-10 08:44:33.641626 | orchestrator | 2025-02-10 08:44:33.641641 | orchestrator | export ARA=false 2025-02-10 08:44:33.641656 | orchestrator | export TEMPEST=false 2025-02-10 08:44:33.641706 | orchestrator | export IS_ZUUL=true 2025-02-10 08:44:33.641721 | orchestrator | 2025-02-10 08:44:33.641735 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.97 2025-02-10 08:44:33.641751 | orchestrator | export EXTERNAL_API=false 2025-02-10 08:44:33.641773 | orchestrator | 2025-02-10 08:44:33.641796 | orchestrator | export IMAGE_USER=ubuntu 2025-02-10 08:44:33.641818 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-02-10 08:44:33.641841 | orchestrator | 2025-02-10 08:44:33.641863 | orchestrator | export CEPH_STACK=ceph-ansible 2025-02-10 08:44:33.641886 | orchestrator | 2025-02-10 08:44:33.641908 | orchestrator | + echo 2025-02-10 08:44:33.641933 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-10 08:44:33.641962 | orchestrator | ++ export INTERACTIVE=false 2025-02-10 08:44:33.699845 | orchestrator | ++ INTERACTIVE=false 2025-02-10 08:44:33.699948 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-10 08:44:33.699979 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-10 08:44:33.699995 | orchestrator | + source /opt/manager-vars.sh 2025-02-10 08:44:33.700010 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-10 08:44:33.700023 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-10 08:44:33.700037 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-10 08:44:33.700051 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-10 08:44:33.700065 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-10 08:44:33.700081 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-10 08:44:33.700103 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-10 08:44:33.700118 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-10 08:44:33.700132 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-10 08:44:33.700146 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-10 08:44:33.700160 | orchestrator | ++ export ARA=false 2025-02-10 08:44:33.700174 | orchestrator | ++ ARA=false 2025-02-10 08:44:33.700188 | orchestrator | ++ export TEMPEST=false 2025-02-10 08:44:33.700201 | orchestrator | ++ TEMPEST=false 2025-02-10 08:44:33.700215 | orchestrator | ++ export IS_ZUUL=true 2025-02-10 08:44:33.700229 | orchestrator | ++ IS_ZUUL=true 2025-02-10 08:44:33.700243 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.97 2025-02-10 08:44:33.700257 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.97 2025-02-10 08:44:33.700279 | orchestrator | ++ export EXTERNAL_API=false 2025-02-10 08:44:33.700294 | orchestrator | ++ EXTERNAL_API=false 2025-02-10 08:44:33.700307 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-10 08:44:33.700321 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-10 08:44:33.700335 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-10 08:44:33.700349 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-10 08:44:33.700366 | orchestrator | ++ export 
CEPH_STACK=ceph-ansible 2025-02-10 08:44:33.700380 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-10 08:44:33.700394 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-02-10 08:44:33.700434 | orchestrator | + docker version 2025-02-10 08:44:33.962188 | orchestrator | Client: Docker Engine - Community 2025-02-10 08:44:33.965236 | orchestrator | Version: 27.4.1 2025-02-10 08:44:33.965331 | orchestrator | API version: 1.47 2025-02-10 08:44:33.965353 | orchestrator | Go version: go1.22.10 2025-02-10 08:44:33.965374 | orchestrator | Git commit: b9d17ea 2025-02-10 08:44:33.965393 | orchestrator | Built: Tue Dec 17 15:45:46 2024 2025-02-10 08:44:33.965416 | orchestrator | OS/Arch: linux/amd64 2025-02-10 08:44:33.965436 | orchestrator | Context: default 2025-02-10 08:44:33.965455 | orchestrator | 2025-02-10 08:44:33.965476 | orchestrator | Server: Docker Engine - Community 2025-02-10 08:44:33.965495 | orchestrator | Engine: 2025-02-10 08:44:33.965515 | orchestrator | Version: 27.4.1 2025-02-10 08:44:33.965567 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-02-10 08:44:33.965587 | orchestrator | Go version: go1.22.10 2025-02-10 08:44:33.965609 | orchestrator | Git commit: c710b88 2025-02-10 08:44:33.965672 | orchestrator | Built: Tue Dec 17 15:45:46 2024 2025-02-10 08:44:33.965694 | orchestrator | OS/Arch: linux/amd64 2025-02-10 08:44:33.965713 | orchestrator | Experimental: false 2025-02-10 08:44:33.965733 | orchestrator | containerd: 2025-02-10 08:44:33.965753 | orchestrator | Version: 1.7.25 2025-02-10 08:44:33.965773 | orchestrator | GitCommit: bcc810d6b9066471b0b6fa75f557a15a1cbf31bb 2025-02-10 08:44:33.965795 | orchestrator | runc: 2025-02-10 08:44:33.965814 | orchestrator | Version: 1.2.4 2025-02-10 08:44:33.965834 | orchestrator | GitCommit: v1.2.4-0-g6c52b3f 2025-02-10 08:44:33.965854 | orchestrator | docker-init: 2025-02-10 08:44:33.965874 | orchestrator | Version: 0.19.0 2025-02-10 08:44:33.965893 | orchestrator | GitCommit: de40ad0 2025-02-10 08:44:33.965926 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-02-10 08:44:33.974776 | orchestrator | + set -e 2025-02-10 08:44:33.974970 | orchestrator | + source /opt/manager-vars.sh 2025-02-10 08:44:33.974991 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-10 08:44:33.975000 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-10 08:44:33.975009 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-10 08:44:33.975018 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-10 08:44:33.975027 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-10 08:44:33.975037 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-10 08:44:33.975046 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-10 08:44:33.975054 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-10 08:44:33.975064 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-10 08:44:33.975089 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-10 08:44:33.975098 | orchestrator | ++ export ARA=false 2025-02-10 08:44:33.975107 | orchestrator | ++ ARA=false 2025-02-10 08:44:33.975115 | orchestrator | ++ export TEMPEST=false 2025-02-10 08:44:33.975124 | orchestrator | ++ TEMPEST=false 2025-02-10 08:44:33.975132 | orchestrator | ++ export IS_ZUUL=true 2025-02-10 08:44:33.975141 | orchestrator | ++ IS_ZUUL=true 2025-02-10 08:44:33.975150 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.97 2025-02-10 08:44:33.975158 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.97 
2025-02-10 08:44:33.975167 | orchestrator | ++ export EXTERNAL_API=false 2025-02-10 08:44:33.975180 | orchestrator | ++ EXTERNAL_API=false 2025-02-10 08:44:33.975189 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-10 08:44:33.975202 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-10 08:44:33.975216 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-10 08:44:33.975230 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-10 08:44:33.975250 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-02-10 08:44:33.975264 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-10 08:44:33.975278 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-10 08:44:33.975293 | orchestrator | ++ export INTERACTIVE=false 2025-02-10 08:44:33.975307 | orchestrator | ++ INTERACTIVE=false 2025-02-10 08:44:33.975320 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-10 08:44:33.975336 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-10 08:44:33.975351 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-02-10 08:44:33.975375 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-10 08:44:33.980977 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh quincy 2025-02-10 08:44:33.981055 | orchestrator | + set -e 2025-02-10 08:44:33.982142 | orchestrator | + VERSION=quincy 2025-02-10 08:44:33.982191 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-02-10 08:44:33.988405 | orchestrator | + [[ -n ceph_version: quincy ]] 2025-02-10 08:44:33.993932 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: quincy/g' /opt/configuration/environments/manager/configuration.yml 2025-02-10 08:44:33.994007 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.1 2025-02-10 08:44:33.999600 | orchestrator | + set -e 2025-02-10 08:44:34.000448 | orchestrator | + VERSION=2024.1 2025-02-10 08:44:34.000479 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-02-10 08:44:34.004391 | orchestrator | + [[ -n openstack_version: 2024.1 ]] 2025-02-10 08:44:34.010422 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.1/g' /opt/configuration/environments/manager/configuration.yml 2025-02-10 08:44:34.010504 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-02-10 08:44:34.011343 | orchestrator | ++ semver latest 7.0.0 2025-02-10 08:44:34.077403 | orchestrator | + [[ -1 -ge 0 ]] 2025-02-10 08:44:34.115615 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-10 08:44:34.115754 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-02-10 08:44:34.115775 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-02-10 08:44:34.115848 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-02-10 08:44:34.117960 | orchestrator | + source /opt/venv/bin/activate 2025-02-10 08:44:34.118998 | orchestrator | ++ deactivate nondestructive 2025-02-10 08:44:34.119130 | orchestrator | ++ '[' -n '' ']' 2025-02-10 08:44:34.119167 | orchestrator | ++ '[' -n '' ']' 2025-02-10 08:44:34.119444 | orchestrator | ++ hash -r 2025-02-10 08:44:34.119481 | orchestrator | ++ '[' -n '' ']' 2025-02-10 08:44:34.119496 | orchestrator | ++ unset VIRTUAL_ENV 2025-02-10 08:44:34.119511 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-02-10 08:44:34.119569 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-02-10 08:44:34.119589 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-02-10 08:44:34.119628 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-02-10 08:44:34.119643 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-02-10 08:44:34.119657 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-02-10 08:44:34.119710 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-02-10 08:44:34.119740 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-02-10 08:44:34.119777 | orchestrator | ++ export PATH 2025-02-10 08:44:34.119793 | orchestrator | ++ '[' -n '' ']' 2025-02-10 08:44:34.119808 | orchestrator | ++ '[' -z '' ']' 2025-02-10 08:44:34.119838 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-02-10 08:44:34.119899 | orchestrator | ++ PS1='(venv) ' 2025-02-10 08:44:34.119915 | orchestrator | ++ export PS1 2025-02-10 08:44:34.119929 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-02-10 08:44:34.119943 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-02-10 08:44:34.119960 | orchestrator | ++ hash -r 2025-02-10 08:44:34.120123 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-02-10 08:44:35.420817 | orchestrator | 2025-02-10 08:44:35.989288 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-02-10 08:44:35.989419 | orchestrator | 2025-02-10 08:44:35.989435 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-02-10 08:44:35.989462 | orchestrator | ok: [testbed-manager] 2025-02-10 08:44:37.045078 | orchestrator | 2025-02-10 08:44:37.045272 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-02-10 08:44:37.045332 | orchestrator | changed: [testbed-manager] 2025-02-10 08:44:39.440368 | orchestrator | 2025-02-10 08:44:39.440595 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-02-10 08:44:39.440620 | orchestrator | 2025-02-10 08:44:39.440636 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 08:44:39.440672 | orchestrator | ok: [testbed-manager] 2025-02-10 08:44:44.887927 | orchestrator | 2025-02-10 08:44:44.888057 | orchestrator | TASK [Pull images] ************************************************************* 2025-02-10 08:44:44.888088 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/ara-server:1.7.2) 2025-02-10 08:46:01.390200 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2) 2025-02-10 08:46:01.390341 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/ceph-ansible:quincy) 2025-02-10 08:46:01.390354 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/inventory-reconciler:latest) 2025-02-10 08:46:01.390363 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/kolla-ansible:2024.1) 2025-02-10 08:46:01.390372 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.2-alpine) 2025-02-10 08:46:01.390382 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/netbox:v4.1.10) 2025-02-10 08:46:01.390391 | orchestrator | changed: [testbed-manager] => 
(item=quay.io/osism/osism-ansible:latest) 2025-02-10 08:46:01.390399 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/osism:latest) 2025-02-10 08:46:01.390408 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/osism-netbox:latest) 2025-02-10 08:46:01.390416 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine) 2025-02-10 08:46:01.390424 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.3.3) 2025-02-10 08:46:01.390432 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.4) 2025-02-10 08:46:01.390440 | orchestrator | 2025-02-10 08:46:01.390449 | orchestrator | TASK [Check status] ************************************************************ 2025-02-10 08:46:01.390496 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-02-10 08:46:01.390506 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-02-10 08:46:01.390555 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-02-10 08:46:01.390564 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-02-10 08:46:01.390573 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j718894528839.1537', 'results_file': '/home/dragon/.ansible_async/j718894528839.1537', 'changed': True, 'item': 'quay.io/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:01.390595 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j403408068804.1562', 'results_file': '/home/dragon/.ansible_async/j403408068804.1562', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:01.390606 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-02-10 08:46:01.390615 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j169021810763.1587', 'results_file': '/home/dragon/.ansible_async/j169021810763.1587', 'changed': True, 'item': 'quay.io/osism/ceph-ansible:quincy', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:01.390623 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j478457911664.1619', 'results_file': '/home/dragon/.ansible_async/j478457911664.1619', 'changed': True, 'item': 'quay.io/osism/inventory-reconciler:latest', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:01.390631 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
2025-02-10 08:46:01.390639 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j61783593816.1652', 'results_file': '/home/dragon/.ansible_async/j61783593816.1652', 'changed': True, 'item': 'quay.io/osism/kolla-ansible:2024.1', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:01.390647 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j206133358874.1684', 'results_file': '/home/dragon/.ansible_async/j206133358874.1684', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.2-alpine', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:01.390658 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j179830637784.1716', 'results_file': '/home/dragon/.ansible_async/j179830637784.1716', 'changed': True, 'item': 'quay.io/osism/netbox:v4.1.10', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:01.390666 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-02-10 08:46:01.390675 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j889034248161.1748', 'results_file': '/home/dragon/.ansible_async/j889034248161.1748', 'changed': True, 'item': 'quay.io/osism/osism-ansible:latest', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:01.390683 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j788259283290.1780', 'results_file': '/home/dragon/.ansible_async/j788259283290.1780', 'changed': True, 'item': 'quay.io/osism/osism:latest', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:01.390691 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j305716826440.1812', 'results_file': '/home/dragon/.ansible_async/j305716826440.1812', 'changed': True, 'item': 'quay.io/osism/osism-netbox:latest', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:01.390699 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j894617807590.1850', 'results_file': '/home/dragon/.ansible_async/j894617807590.1850', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:01.390713 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j410254737026.1884', 'results_file': '/home/dragon/.ansible_async/j410254737026.1884', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.3.3', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:01.390729 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j340857937472.1937', 'results_file': '/home/dragon/.ansible_async/j340857937472.1937', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.4', 'ansible_loop_var': 'item'}) 2025-02-10 08:46:01.443121 | orchestrator | 2025-02-10 08:46:01.443286 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-02-10 08:46:01.443338 | orchestrator | ok: [testbed-manager] 2025-02-10 08:46:01.982726 | orchestrator | 2025-02-10 08:46:01.982862 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-02-10 08:46:01.982898 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:02.349886 | orchestrator | 2025-02-10 
08:46:02.350089 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-02-10 08:46:02.350132 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:02.707718 | orchestrator | 2025-02-10 08:46:02.707888 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-02-10 08:46:02.707941 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:02.747708 | orchestrator | 2025-02-10 08:46:02.747861 | orchestrator | TASK [Do not use Nexus for Ceph on CentOS] ************************************* 2025-02-10 08:46:02.747905 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:46:02.807197 | orchestrator | 2025-02-10 08:46:02.807303 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-02-10 08:46:02.807325 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:46:03.136661 | orchestrator | 2025-02-10 08:46:03.136777 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-02-10 08:46:03.136812 | orchestrator | ok: [testbed-manager] 2025-02-10 08:46:03.292082 | orchestrator | 2025-02-10 08:46:03.292213 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-02-10 08:46:03.292249 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:46:05.163286 | orchestrator | 2025-02-10 08:46:05.163420 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-02-10 08:46:05.163441 | orchestrator | 2025-02-10 08:46:05.163456 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 08:46:05.163509 | orchestrator | ok: [testbed-manager] 2025-02-10 08:46:05.368248 | orchestrator | 2025-02-10 08:46:05.368349 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-02-10 08:46:05.368373 | orchestrator | 2025-02-10 08:46:05.467652 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-02-10 08:46:05.467806 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-02-10 08:46:06.710905 | orchestrator | 2025-02-10 08:46:06.711018 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-02-10 08:46:06.711054 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-02-10 08:46:08.679763 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-02-10 08:46:08.679896 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-02-10 08:46:08.679912 | orchestrator | 2025-02-10 08:46:08.679926 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-02-10 08:46:08.679956 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-02-10 08:46:09.387636 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-02-10 08:46:09.387747 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-02-10 08:46:09.387758 | orchestrator | 2025-02-10 08:46:09.387766 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-02-10 08:46:09.387787 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:46:10.072049 | orchestrator | changed: [testbed-manager] 
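Note: the "Pull images" / "Check status" pair at the start of this play is Ansible's async/poll pattern: each pull is started as a background job and "Check status" polls the async results, so the FAILED - RETRYING lines are normal polling output rather than real failures. A rough shell equivalent of the same idea (start all pulls in the background, then wait for each one), assuming only the docker CLI and a few of the images from the log; the playbook itself does this with async tasks, not a shell loop:

    set -euo pipefail

    images=(
        quay.io/osism/ara-server:1.7.2
        index.docker.io/library/mariadb:11.6.2
        quay.io/osism/kolla-ansible:2024.1
        quay.io/osism/osism:latest
        # ... remaining images from the "Pull images" task above
    )

    pids=()
    for image in "${images[@]}"; do
        docker pull "$image" >/dev/null &   # start every pull in the background
        pids+=("$!")
    done

    # Wait for all pulls to finish; this is the role of the "Check status"
    # task and its 120 retries in the playbook.
    for pid in "${pids[@]}"; do
        wait "$pid"
    done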
2025-02-10 08:46:10.072195 | orchestrator | 2025-02-10 08:46:10.072258 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-02-10 08:46:10.072292 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:46:10.160446 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:10.160610 | orchestrator | 2025-02-10 08:46:10.160631 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-02-10 08:46:10.160666 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:46:10.541983 | orchestrator | 2025-02-10 08:46:10.542144 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-02-10 08:46:10.542171 | orchestrator | ok: [testbed-manager] 2025-02-10 08:46:10.644332 | orchestrator | 2025-02-10 08:46:10.644466 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-02-10 08:46:10.644505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-02-10 08:46:11.781000 | orchestrator | 2025-02-10 08:46:11.781173 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-02-10 08:46:11.781231 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:12.633478 | orchestrator | 2025-02-10 08:46:12.633676 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-02-10 08:46:12.633715 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:15.695044 | orchestrator | 2025-02-10 08:46:15.695215 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-02-10 08:46:15.695257 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:15.961069 | orchestrator | 2025-02-10 08:46:15.961221 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-02-10 08:46:15.961263 | orchestrator | 2025-02-10 08:46:16.094790 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-02-10 08:46:16.094954 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-02-10 08:46:18.578757 | orchestrator | 2025-02-10 08:46:18.578911 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-02-10 08:46:18.578953 | orchestrator | ok: [testbed-manager] 2025-02-10 08:46:18.723735 | orchestrator | 2025-02-10 08:46:18.723890 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-02-10 08:46:18.723929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-02-10 08:46:19.946691 | orchestrator | 2025-02-10 08:46:19.946842 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-02-10 08:46:19.946883 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-02-10 08:46:20.047952 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-02-10 08:46:20.048096 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-02-10 08:46:20.048115 | orchestrator | 2025-02-10 08:46:20.048131 | orchestrator | TASK [osism.services.netbox : 
Include postgres config tasks] ******************* 2025-02-10 08:46:20.048167 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-02-10 08:46:20.725843 | orchestrator | 2025-02-10 08:46:20.726066 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-02-10 08:46:20.726124 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-02-10 08:46:21.396989 | orchestrator | 2025-02-10 08:46:21.397132 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-02-10 08:46:21.397172 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:46:21.822858 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:21.822992 | orchestrator | 2025-02-10 08:46:21.823011 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-02-10 08:46:21.823043 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:22.197492 | orchestrator | 2025-02-10 08:46:22.197621 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-02-10 08:46:22.197643 | orchestrator | ok: [testbed-manager] 2025-02-10 08:46:22.255920 | orchestrator | 2025-02-10 08:46:22.256033 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-02-10 08:46:22.256083 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:46:22.894100 | orchestrator | 2025-02-10 08:46:22.894211 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-02-10 08:46:22.894238 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:23.003160 | orchestrator | 2025-02-10 08:46:23.003293 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-02-10 08:46:23.003334 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-02-10 08:46:23.785928 | orchestrator | 2025-02-10 08:46:23.786126 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-02-10 08:46:23.786171 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-02-10 08:46:24.464454 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-02-10 08:46:24.464654 | orchestrator | 2025-02-10 08:46:24.464675 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-02-10 08:46:24.464709 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-02-10 08:46:25.148893 | orchestrator | 2025-02-10 08:46:25.149034 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-02-10 08:46:25.149076 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:25.211754 | orchestrator | 2025-02-10 08:46:25.211898 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-02-10 08:46:25.211955 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:46:25.879969 | orchestrator | 2025-02-10 08:46:25.880107 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-02-10 08:46:25.880137 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:27.797220 | orchestrator | 
2025-02-10 08:46:27.797378 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-02-10 08:46:27.797432 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:46:33.977966 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:46:33.978174 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:46:33.978195 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:33.978214 | orchestrator | 2025-02-10 08:46:33.978230 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-02-10 08:46:33.978265 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-02-10 08:46:34.642688 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-02-10 08:46:34.642834 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-02-10 08:46:34.642856 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-02-10 08:46:34.642871 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-02-10 08:46:34.642887 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-02-10 08:46:34.642902 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-02-10 08:46:34.642917 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-02-10 08:46:34.642931 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-02-10 08:46:34.642945 | orchestrator | changed: [testbed-manager] => (item=users) 2025-02-10 08:46:34.642960 | orchestrator | 2025-02-10 08:46:34.642975 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-02-10 08:46:34.643010 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-02-10 08:46:34.826336 | orchestrator | 2025-02-10 08:46:34.826474 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-02-10 08:46:34.826587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-02-10 08:46:35.559552 | orchestrator | 2025-02-10 08:46:35.559665 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-02-10 08:46:35.559687 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:36.207954 | orchestrator | 2025-02-10 08:46:36.208092 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-02-10 08:46:36.208140 | orchestrator | ok: [testbed-manager] 2025-02-10 08:46:36.979617 | orchestrator | 2025-02-10 08:46:36.979766 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-02-10 08:46:36.979818 | orchestrator | changed: [testbed-manager] 2025-02-10 08:46:39.526391 | orchestrator | 2025-02-10 08:46:39.526584 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-02-10 08:46:39.526639 | orchestrator | ok: [testbed-manager] 2025-02-10 08:46:40.508009 | orchestrator | 2025-02-10 08:46:40.508136 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-02-10 08:46:40.508168 | orchestrator | ok: [testbed-manager] 2025-02-10 08:47:02.754838 | orchestrator | 2025-02-10 08:47:02.754981 | orchestrator | TASK 
[osism.services.netbox : Manage netbox service] *************************** 2025-02-10 08:47:02.755012 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-02-10 08:47:02.843414 | orchestrator | ok: [testbed-manager] 2025-02-10 08:47:02.843572 | orchestrator | 2025-02-10 08:47:02.843598 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-02-10 08:47:02.843643 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:47:02.891680 | orchestrator | 2025-02-10 08:47:02.891834 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-02-10 08:47:02.891854 | orchestrator | 2025-02-10 08:47:02.891871 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-02-10 08:47:02.891904 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:47:02.985433 | orchestrator | 2025-02-10 08:47:02.985607 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-02-10 08:47:02.985648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-02-10 08:47:03.880462 | orchestrator | 2025-02-10 08:47:03.880699 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-02-10 08:47:03.880758 | orchestrator | ok: [testbed-manager] 2025-02-10 08:47:03.978669 | orchestrator | 2025-02-10 08:47:03.978811 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-02-10 08:47:03.978842 | orchestrator | ok: [testbed-manager] 2025-02-10 08:47:04.052835 | orchestrator | 2025-02-10 08:47:04.052984 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-02-10 08:47:04.053025 | orchestrator | ok: [testbed-manager] => { 2025-02-10 08:47:04.710595 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-02-10 08:47:04.710748 | orchestrator | } 2025-02-10 08:47:04.710768 | orchestrator | 2025-02-10 08:47:04.710786 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-02-10 08:47:04.710819 | orchestrator | ok: [testbed-manager] 2025-02-10 08:47:05.645147 | orchestrator | 2025-02-10 08:47:05.645321 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-02-10 08:47:05.645363 | orchestrator | ok: [testbed-manager] 2025-02-10 08:47:05.730780 | orchestrator | 2025-02-10 08:47:05.730919 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-02-10 08:47:05.730958 | orchestrator | ok: [testbed-manager] 2025-02-10 08:47:05.786726 | orchestrator | 2025-02-10 08:47:05.786852 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-02-10 08:47:05.786890 | orchestrator | ok: [testbed-manager] => { 2025-02-10 08:47:05.851632 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-02-10 08:47:05.851796 | orchestrator | } 2025-02-10 08:47:05.851823 | orchestrator | 2025-02-10 08:47:05.851841 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-02-10 08:47:05.851881 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:47:05.917576 | orchestrator | 2025-02-10 08:47:05.917694 | 
orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-02-10 08:47:05.917718 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:47:05.978466 | orchestrator | 2025-02-10 08:47:05.978657 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-02-10 08:47:05.978696 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:47:06.046059 | orchestrator | 2025-02-10 08:47:06.046190 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-02-10 08:47:06.046220 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:47:06.113761 | orchestrator | 2025-02-10 08:47:06.113897 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-02-10 08:47:06.113931 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:47:06.188723 | orchestrator | 2025-02-10 08:47:06.188856 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-02-10 08:47:06.188908 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:47:07.792377 | orchestrator | 2025-02-10 08:47:07.792576 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-02-10 08:47:07.792621 | orchestrator | changed: [testbed-manager] 2025-02-10 08:47:07.926592 | orchestrator | 2025-02-10 08:47:07.926772 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-02-10 08:47:07.926833 | orchestrator | ok: [testbed-manager] 2025-02-10 08:48:07.997394 | orchestrator | 2025-02-10 08:48:07.997626 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-02-10 08:48:07.997671 | orchestrator | Pausing for 60 seconds 2025-02-10 08:48:08.098279 | orchestrator | changed: [testbed-manager] 2025-02-10 08:48:08.098407 | orchestrator | 2025-02-10 08:48:08.098422 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-02-10 08:48:08.098450 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-02-10 08:51:16.980902 | orchestrator | 2025-02-10 08:51:16.981071 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-02-10 08:51:16.981112 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-02-10 08:51:18.851394 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-02-10 08:51:18.851604 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-02-10 08:51:18.851626 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-02-10 08:51:18.851642 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-02-10 08:51:18.851657 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-02-10 08:51:18.851672 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 
2025-02-10 08:51:18.851686 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-02-10 08:51:18.851701 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-02-10 08:51:18.851715 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-02-10 08:51:18.851730 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-02-10 08:51:18.851744 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-02-10 08:51:18.851758 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-02-10 08:51:18.851772 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-02-10 08:51:18.851785 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-02-10 08:51:18.851799 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-02-10 08:51:18.851813 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-02-10 08:51:18.851827 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-02-10 08:51:18.851841 | orchestrator | changed: [testbed-manager] 2025-02-10 08:51:18.851857 | orchestrator | 2025-02-10 08:51:18.851873 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-02-10 08:51:18.851925 | orchestrator | 2025-02-10 08:51:18.851956 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 08:51:18.851991 | orchestrator | ok: [testbed-manager] 2025-02-10 08:51:18.957796 | orchestrator | 2025-02-10 08:51:18.957943 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-02-10 08:51:18.957977 | orchestrator | 2025-02-10 08:51:19.033956 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-02-10 08:51:19.034157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-02-10 08:51:20.735682 | orchestrator | 2025-02-10 08:51:20.735814 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-02-10 08:51:20.735851 | orchestrator | ok: [testbed-manager] 2025-02-10 08:51:20.795707 | orchestrator | 2025-02-10 08:51:20.795835 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-02-10 08:51:20.795871 | orchestrator | ok: [testbed-manager] 2025-02-10 08:51:20.896386 | orchestrator | 2025-02-10 08:51:20.896617 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-02-10 08:51:20.896670 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-02-10 08:51:23.803677 | orchestrator | 2025-02-10 08:51:23.803836 | orchestrator | TASK [osism.services.manager : Create 
required directories] ******************** 2025-02-10 08:51:23.803896 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-02-10 08:51:24.469945 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-02-10 08:51:24.470141 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-02-10 08:51:24.470161 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-02-10 08:51:24.470174 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-02-10 08:51:24.470187 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-02-10 08:51:24.470199 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-02-10 08:51:24.470211 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-02-10 08:51:24.470226 | orchestrator | 2025-02-10 08:51:24.470239 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-02-10 08:51:24.470268 | orchestrator | changed: [testbed-manager] 2025-02-10 08:51:24.577523 | orchestrator | 2025-02-10 08:51:24.577679 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-02-10 08:51:24.577723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-02-10 08:51:25.854616 | orchestrator | 2025-02-10 08:51:25.854742 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-02-10 08:51:25.854774 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-02-10 08:51:26.489869 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-02-10 08:51:26.490126 | orchestrator | 2025-02-10 08:51:26.490161 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-02-10 08:51:26.490209 | orchestrator | changed: [testbed-manager] 2025-02-10 08:51:26.563574 | orchestrator | 2025-02-10 08:51:26.563725 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-02-10 08:51:26.563767 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:51:26.633308 | orchestrator | 2025-02-10 08:51:26.633446 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-02-10 08:51:26.633483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-02-10 08:51:28.031428 | orchestrator | 2025-02-10 08:51:28.031633 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-02-10 08:51:28.031696 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:51:28.668356 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:51:28.668547 | orchestrator | changed: [testbed-manager] 2025-02-10 08:51:28.668572 | orchestrator | 2025-02-10 08:51:28.668589 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-02-10 08:51:28.668623 | orchestrator | changed: [testbed-manager] 2025-02-10 08:51:28.757289 | orchestrator | 2025-02-10 08:51:28.757439 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-02-10 08:51:28.757521 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-02-10 08:51:29.412742 | orchestrator | 2025-02-10 08:51:29.412880 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-02-10 08:51:29.412918 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 08:51:30.054114 | orchestrator | changed: [testbed-manager] 2025-02-10 08:51:30.054300 | orchestrator | 2025-02-10 08:51:30.054339 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-02-10 08:51:30.054395 | orchestrator | changed: [testbed-manager] 2025-02-10 08:51:30.153462 | orchestrator | 2025-02-10 08:51:30.153645 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-02-10 08:51:30.153680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-02-10 08:51:30.783389 | orchestrator | 2025-02-10 08:51:30.783621 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-02-10 08:51:30.783680 | orchestrator | changed: [testbed-manager] 2025-02-10 08:51:31.208354 | orchestrator | 2025-02-10 08:51:31.208465 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-02-10 08:51:31.208513 | orchestrator | changed: [testbed-manager] 2025-02-10 08:51:32.473772 | orchestrator | 2025-02-10 08:51:32.473918 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-02-10 08:51:32.473958 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-02-10 08:51:33.134297 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-02-10 08:51:33.134448 | orchestrator | 2025-02-10 08:51:33.134469 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-02-10 08:51:33.134567 | orchestrator | changed: [testbed-manager] 2025-02-10 08:51:33.451664 | orchestrator | 2025-02-10 08:51:33.451811 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-02-10 08:51:33.451850 | orchestrator | ok: [testbed-manager] 2025-02-10 08:51:33.500094 | orchestrator | 2025-02-10 08:51:33.500223 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-02-10 08:51:33.500260 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:51:34.093299 | orchestrator | 2025-02-10 08:51:34.093478 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-02-10 08:51:34.093569 | orchestrator | changed: [testbed-manager] 2025-02-10 08:51:34.169127 | orchestrator | 2025-02-10 08:51:34.169252 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-02-10 08:51:34.169287 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-02-10 08:51:34.217470 | orchestrator | 2025-02-10 08:51:34.217639 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-02-10 08:51:34.217674 | orchestrator | ok: [testbed-manager] 2025-02-10 08:51:36.101153 | orchestrator | 2025-02-10 08:51:36.101277 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] 
*************************** 2025-02-10 08:51:36.101307 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-02-10 08:51:36.799562 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-02-10 08:51:36.799694 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-02-10 08:51:36.799710 | orchestrator | 2025-02-10 08:51:36.799723 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-02-10 08:51:36.799752 | orchestrator | changed: [testbed-manager] 2025-02-10 08:51:36.864998 | orchestrator | 2025-02-10 08:51:36.865135 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-02-10 08:51:36.865175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-02-10 08:51:36.922442 | orchestrator | 2025-02-10 08:51:36.922632 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-02-10 08:51:36.922730 | orchestrator | ok: [testbed-manager] 2025-02-10 08:51:37.579387 | orchestrator | 2025-02-10 08:51:37.579652 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-02-10 08:51:37.579700 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-02-10 08:51:37.657911 | orchestrator | 2025-02-10 08:51:37.658124 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-02-10 08:51:37.658162 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-02-10 08:51:38.391346 | orchestrator | 2025-02-10 08:51:38.391483 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-02-10 08:51:38.391609 | orchestrator | changed: [testbed-manager] 2025-02-10 08:51:39.038179 | orchestrator | 2025-02-10 08:51:39.038348 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-02-10 08:51:39.038384 | orchestrator | ok: [testbed-manager] 2025-02-10 08:51:39.094733 | orchestrator | 2025-02-10 08:51:39.094850 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-02-10 08:51:39.094875 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:51:39.156191 | orchestrator | 2025-02-10 08:51:39.156311 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-02-10 08:51:39.156378 | orchestrator | ok: [testbed-manager] 2025-02-10 08:51:40.006308 | orchestrator | 2025-02-10 08:51:40.006567 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-02-10 08:51:40.006624 | orchestrator | changed: [testbed-manager] 2025-02-10 08:52:05.116820 | orchestrator | 2025-02-10 08:52:05.116982 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-02-10 08:52:05.117025 | orchestrator | changed: [testbed-manager] 2025-02-10 08:52:05.782404 | orchestrator | 2025-02-10 08:52:05.782535 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-02-10 08:52:05.782558 | orchestrator | ok: [testbed-manager] 2025-02-10 08:52:09.886705 | orchestrator | 2025-02-10 08:52:09.886792 | orchestrator | TASK [osism.services.manager : Manage manager 
service] ************************* 2025-02-10 08:52:09.886815 | orchestrator | changed: [testbed-manager] 2025-02-10 08:52:09.948854 | orchestrator | 2025-02-10 08:52:09.948962 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-02-10 08:52:09.948988 | orchestrator | ok: [testbed-manager] 2025-02-10 08:52:10.024114 | orchestrator | 2025-02-10 08:52:10.024220 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-02-10 08:52:10.024238 | orchestrator | 2025-02-10 08:52:10.024253 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-02-10 08:52:10.024282 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:53:10.083976 | orchestrator | 2025-02-10 08:53:10.084153 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-02-10 08:53:10.084193 | orchestrator | Pausing for 60 seconds 2025-02-10 08:53:11.727561 | orchestrator | changed: [testbed-manager] 2025-02-10 08:53:11.727682 | orchestrator | 2025-02-10 08:53:11.727696 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-02-10 08:53:11.727721 | orchestrator | changed: [testbed-manager] 2025-02-10 08:53:32.876738 | orchestrator | 2025-02-10 08:53:32.876908 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-02-10 08:53:32.876944 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-02-10 08:53:37.823770 | orchestrator | changed: [testbed-manager] 2025-02-10 08:53:37.823918 | orchestrator | 2025-02-10 08:53:37.823942 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-02-10 08:53:37.823979 | orchestrator | changed: [testbed-manager] 2025-02-10 08:53:37.922226 | orchestrator | 2025-02-10 08:53:37.922382 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-02-10 08:53:37.922423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-02-10 08:53:37.993329 | orchestrator | 2025-02-10 08:53:37.993530 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-02-10 08:53:37.993561 | orchestrator | 2025-02-10 08:53:37.993630 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-02-10 08:53:37.993677 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:53:38.130401 | orchestrator | 2025-02-10 08:53:38.130527 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:53:38.130537 | orchestrator | testbed-manager : ok=103 changed=54 unreachable=0 failed=0 skipped=19 rescued=0 ignored=0 2025-02-10 08:53:38.130544 | orchestrator | 2025-02-10 08:53:38.130563 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-02-10 08:53:38.138170 | orchestrator | + deactivate 2025-02-10 08:53:38.138182 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-02-10 08:53:38.138189 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-02-10 08:53:38.138195 | orchestrator | + export PATH 2025-02-10 08:53:38.138200 | 
orchestrator | + unset _OLD_VIRTUAL_PATH 2025-02-10 08:53:38.138206 | orchestrator | + '[' -n '' ']' 2025-02-10 08:53:38.138211 | orchestrator | + hash -r 2025-02-10 08:53:38.138217 | orchestrator | + '[' -n '' ']' 2025-02-10 08:53:38.138223 | orchestrator | + unset VIRTUAL_ENV 2025-02-10 08:53:38.138228 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-02-10 08:53:38.138234 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-02-10 08:53:38.138239 | orchestrator | + unset -f deactivate 2025-02-10 08:53:38.138245 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-02-10 08:53:38.138253 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-02-10 08:53:38.138943 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-02-10 08:53:38.138953 | orchestrator | + local max_attempts=60 2025-02-10 08:53:38.138959 | orchestrator | + local name=ceph-ansible 2025-02-10 08:53:38.138965 | orchestrator | + local attempt_num=1 2025-02-10 08:53:38.138973 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-02-10 08:53:38.174825 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-10 08:53:38.176169 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-02-10 08:53:38.176191 | orchestrator | + local max_attempts=60 2025-02-10 08:53:38.176203 | orchestrator | + local name=kolla-ansible 2025-02-10 08:53:38.176216 | orchestrator | + local attempt_num=1 2025-02-10 08:53:38.176233 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-02-10 08:53:38.205263 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-10 08:53:38.205543 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-02-10 08:53:38.205568 | orchestrator | + local max_attempts=60 2025-02-10 08:53:38.205581 | orchestrator | + local name=osism-ansible 2025-02-10 08:53:38.205594 | orchestrator | + local attempt_num=1 2025-02-10 08:53:38.205611 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-02-10 08:53:38.241569 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-10 08:53:38.960038 | orchestrator | + [[ true == \t\r\u\e ]] 2025-02-10 08:53:38.960163 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-02-10 08:53:38.960202 | orchestrator | ++ semver latest 8.0.0 2025-02-10 08:53:39.014209 | orchestrator | + [[ -1 -ge 0 ]] 2025-02-10 08:53:39.015007 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-10 08:53:39.015041 | orchestrator | + wait_for_container_healthy 60 netbox-netbox-1 2025-02-10 08:53:39.015059 | orchestrator | + local max_attempts=60 2025-02-10 08:53:39.015074 | orchestrator | + local name=netbox-netbox-1 2025-02-10 08:53:39.015089 | orchestrator | + local attempt_num=1 2025-02-10 08:53:39.015109 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' netbox-netbox-1 2025-02-10 08:53:39.050897 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-10 08:53:39.058395 | orchestrator | + /opt/configuration/scripts/bootstrap/000-netbox.sh 2025-02-10 08:53:39.058575 | orchestrator | + set -e 2025-02-10 08:53:40.648396 | orchestrator | + osism netbox import 2025-02-10 08:53:40.648613 | orchestrator | 2025-02-10 08:53:40 | INFO  | Task a7535a40-e2e0-49cb-ab9d-60eefe892a9b is running. Wait. No more output. 
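Note: the wait_for_container_healthy calls traced above poll Docker's health status until a container reports healthy. A sketch of such a helper, reconstructed from the variable names and the docker inspect call in the trace; the retry interval and the failure handling are assumptions:

    wait_for_container_healthy() {
        local max_attempts=$1
        local name=$2
        local attempt_num=1

        # Ask Docker for the container's health status until it reports "healthy".
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
            if (( attempt_num >= max_attempts )); then
                echo "container $name did not become healthy in time" >&2
                return 1
            fi
            attempt_num=$(( attempt_num + 1 ))
            sleep 5   # interval assumed; the trace only shows the inspect call
        done
    }

    # Usage as seen in the trace:
    wait_for_container_healthy 60 ceph-ansible
    wait_for_container_healthy 60 kolla-ansible
    wait_for_container_healthy 60 osism-ansible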
2025-02-10 08:53:49.069738 | orchestrator | + osism netbox init 2025-02-10 08:53:50.394388 | orchestrator | 2025-02-10 08:53:50 | INFO  | Task d58bb67c-4e30-431b-89e5-01cf30149cce was prepared for execution. 2025-02-10 08:53:52.036087 | orchestrator | 2025-02-10 08:53:50 | INFO  | It takes a moment until task d58bb67c-4e30-431b-89e5-01cf30149cce has been started and output is visible here. 2025-02-10 08:53:52.036299 | orchestrator | 2025-02-10 08:53:52.946874 | orchestrator | PLAY [Wait for netbox service] ************************************************* 2025-02-10 08:53:52.947014 | orchestrator | 2025-02-10 08:53:52.947035 | orchestrator | TASK [Wait for netbox service] ************************************************* 2025-02-10 08:53:52.947071 | orchestrator | [WARNING]: Platform linux on host localhost is using the discovered Python 2025-02-10 08:53:52.947211 | orchestrator | interpreter at /usr/local/bin/python3.13, but future installation of another 2025-02-10 08:53:52.947232 | orchestrator | Python interpreter could change the meaning of that path. See 2025-02-10 08:53:52.947252 | orchestrator | https://docs.ansible.com/ansible- 2025-02-10 08:53:52.948027 | orchestrator | core/2.18/reference_appendices/interpreter_discovery.html for more information. 2025-02-10 08:53:52.956833 | orchestrator | ok: [localhost] 2025-02-10 08:53:52.957545 | orchestrator | 2025-02-10 08:53:52.957699 | orchestrator | PLAY [Manage sites and locations] ********************************************** 2025-02-10 08:53:52.958751 | orchestrator | 2025-02-10 08:53:52.959062 | orchestrator | TASK [Manage Discworld site] *************************************************** 2025-02-10 08:53:54.530952 | orchestrator | changed: [localhost] 2025-02-10 08:53:54.531340 | orchestrator | 2025-02-10 08:53:54.531383 | orchestrator | TASK [Manage Ankh-Morpork location] ******************************************** 2025-02-10 08:53:55.860450 | orchestrator | changed: [localhost] 2025-02-10 08:53:55.860992 | orchestrator | 2025-02-10 08:53:55.861293 | orchestrator | PLAY [Manage IP prefixes] ****************************************************** 2025-02-10 08:53:55.861354 | orchestrator | 2025-02-10 08:53:57.244558 | orchestrator | TASK [Manage 192.168.16.0/20] ************************************************** 2025-02-10 08:53:57.244816 | orchestrator | changed: [localhost] 2025-02-10 08:53:57.244849 | orchestrator | 2025-02-10 08:53:57.244870 | orchestrator | TASK [Manage 192.168.112.0/20] ************************************************* 2025-02-10 08:53:58.449644 | orchestrator | changed: [localhost] 2025-02-10 08:53:58.449855 | orchestrator | 2025-02-10 08:53:58.449877 | orchestrator | PLAY [Manage IP addresses] ***************************************************** 2025-02-10 08:53:58.449910 | orchestrator | 2025-02-10 08:53:58.450546 | orchestrator | TASK [Manage api.testbed.osism.xyz IP address] ********************************* 2025-02-10 08:53:59.733415 | orchestrator | changed: [localhost] 2025-02-10 08:53:59.733746 | orchestrator | 2025-02-10 08:54:00.872096 | orchestrator | TASK [Manage api-int.testbed.osism.xyz IP address] ***************************** 2025-02-10 08:54:00.872260 | orchestrator | changed: [localhost] 2025-02-10 08:54:00.872859 | orchestrator | 2025-02-10 08:54:00.872905 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:54:00.873747 | orchestrator | 2025-02-10 08:54:00 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-02-10 08:54:00.874745 | orchestrator | 2025-02-10 08:54:00 | INFO  | Please wait and do not abort execution. 2025-02-10 08:54:00.874793 | orchestrator | localhost : ok=7 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 08:54:00.875673 | orchestrator | 2025-02-10 08:54:01.095250 | orchestrator | + osism netbox manage 1000 2025-02-10 08:54:02.405558 | orchestrator | 2025-02-10 08:54:02 | INFO  | Task 86b1de95-900a-420e-bf25-75cef91111f3 was prepared for execution. 2025-02-10 08:54:04.105530 | orchestrator | 2025-02-10 08:54:02 | INFO  | It takes a moment until task 86b1de95-900a-420e-bf25-75cef91111f3 has been started and output is visible here. 2025-02-10 08:54:04.105690 | orchestrator | 2025-02-10 08:54:04.105925 | orchestrator | PLAY [Manage rack 1000] ******************************************************** 2025-02-10 08:54:04.106840 | orchestrator | 2025-02-10 08:54:04.107541 | orchestrator | TASK [Manage rack 1000] ******************************************************** 2025-02-10 08:54:05.748169 | orchestrator | changed: [localhost] 2025-02-10 08:54:12.171628 | orchestrator | 2025-02-10 08:54:12.171785 | orchestrator | TASK [Manage testbed-switch-0] ************************************************* 2025-02-10 08:54:12.171824 | orchestrator | changed: [localhost] 2025-02-10 08:54:18.173603 | orchestrator | 2025-02-10 08:54:18.173753 | orchestrator | TASK [Manage testbed-switch-1] ************************************************* 2025-02-10 08:54:18.173793 | orchestrator | changed: [localhost] 2025-02-10 08:54:24.287268 | orchestrator | 2025-02-10 08:54:24.287419 | orchestrator | TASK [Manage testbed-switch-2] ************************************************* 2025-02-10 08:54:24.287459 | orchestrator | changed: [localhost] 2025-02-10 08:54:24.287637 | orchestrator | 2025-02-10 08:54:24.287665 | orchestrator | TASK [Manage testbed-manager] ************************************************** 2025-02-10 08:54:26.911997 | orchestrator | changed: [localhost] 2025-02-10 08:54:29.165466 | orchestrator | 2025-02-10 08:54:29.165618 | orchestrator | TASK [Manage testbed-node-0] *************************************************** 2025-02-10 08:54:29.165647 | orchestrator | changed: [localhost] 2025-02-10 08:54:31.418953 | orchestrator | 2025-02-10 08:54:31.419114 | orchestrator | TASK [Manage testbed-node-1] *************************************************** 2025-02-10 08:54:31.419154 | orchestrator | changed: [localhost] 2025-02-10 08:54:33.632735 | orchestrator | 2025-02-10 08:54:33.632885 | orchestrator | TASK [Manage testbed-node-2] *************************************************** 2025-02-10 08:54:33.632927 | orchestrator | changed: [localhost] 2025-02-10 08:54:35.935901 | orchestrator | 2025-02-10 08:54:35.936068 | orchestrator | TASK [Manage testbed-node-3] *************************************************** 2025-02-10 08:54:35.936105 | orchestrator | changed: [localhost] 2025-02-10 08:54:35.937271 | orchestrator | 2025-02-10 08:54:38.176997 | orchestrator | TASK [Manage testbed-node-4] *************************************************** 2025-02-10 08:54:38.177203 | orchestrator | changed: [localhost] 2025-02-10 08:54:38.177550 | orchestrator | 2025-02-10 08:54:38.177607 | orchestrator | TASK [Manage testbed-node-5] *************************************************** 2025-02-10 08:54:40.443445 | orchestrator | changed: [localhost] 2025-02-10 08:54:40.443759 | orchestrator | 2025-02-10 08:54:40.443790 
| orchestrator | TASK [Manage testbed-node-6] *************************************************** 2025-02-10 08:54:43.071758 | orchestrator | changed: [localhost] 2025-02-10 08:54:43.074310 | orchestrator | 2025-02-10 08:54:43.074401 | orchestrator | TASK [Manage testbed-node-7] *************************************************** 2025-02-10 08:54:45.347297 | orchestrator | changed: [localhost] 2025-02-10 08:54:45.348245 | orchestrator | 2025-02-10 08:54:45.348757 | orchestrator | TASK [Manage testbed-node-8] *************************************************** 2025-02-10 08:54:47.562225 | orchestrator | changed: [localhost] 2025-02-10 08:54:47.562532 | orchestrator | 2025-02-10 08:54:47.562573 | orchestrator | TASK [Manage testbed-node-9] *************************************************** 2025-02-10 08:54:49.831464 | orchestrator | changed: [localhost] 2025-02-10 08:54:49.832294 | orchestrator | 2025-02-10 08:54:49.832404 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:54:49.832443 | orchestrator | 2025-02-10 08:54:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 08:54:49.834084 | orchestrator | 2025-02-10 08:54:49 | INFO  | Please wait and do not abort execution. 2025-02-10 08:54:49.834170 | orchestrator | localhost : ok=15 changed=15 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 08:54:49.834441 | orchestrator | 2025-02-10 08:54:50.163458 | orchestrator | + osism netbox connect 1000 --state a 2025-02-10 08:54:51.647129 | orchestrator | 2025-02-10 08:54:51 | INFO  | Task 171e5c6b-f085-495e-bd7f-f90d6ddd5f5a for device testbed-node-7 is running in background 2025-02-10 08:54:51.650866 | orchestrator | 2025-02-10 08:54:51 | INFO  | Task d1192d61-6e7d-4893-a275-a45f864c702b for device testbed-node-8 is running in background 2025-02-10 08:54:51.652605 | orchestrator | 2025-02-10 08:54:51 | INFO  | Task d9308666-6ada-44bb-b21e-ee42546f0eaa for device testbed-switch-1 is running in background 2025-02-10 08:54:51.655456 | orchestrator | 2025-02-10 08:54:51 | INFO  | Task 61ca064a-7181-46e9-9d1b-d4a71672d89f for device testbed-node-9 is running in background 2025-02-10 08:54:51.658671 | orchestrator | 2025-02-10 08:54:51 | INFO  | Task 3bcd3248-f7a4-4a44-b5a4-56f227ac28f9 for device testbed-node-3 is running in background 2025-02-10 08:54:51.660259 | orchestrator | 2025-02-10 08:54:51 | INFO  | Task 0156f903-1dc6-43f6-854a-5e0ec81e9b2d for device testbed-node-2 is running in background 2025-02-10 08:54:51.662555 | orchestrator | 2025-02-10 08:54:51 | INFO  | Task 905676f9-9334-4a67-ba44-2fc23f8ad390 for device testbed-node-5 is running in background 2025-02-10 08:54:51.665192 | orchestrator | 2025-02-10 08:54:51 | INFO  | Task 0ae69603-229e-490b-9f07-0ac3b54fe7d3 for device testbed-node-4 is running in background 2025-02-10 08:54:51.667119 | orchestrator | 2025-02-10 08:54:51 | INFO  | Task 9c6b3bce-fff4-4717-acd8-f2b79e336d43 for device testbed-manager is running in background 2025-02-10 08:54:51.669724 | orchestrator | 2025-02-10 08:54:51 | INFO  | Task 740d676d-7e09-454e-9329-53e9ecd23511 for device testbed-switch-0 is running in background 2025-02-10 08:54:51.672856 | orchestrator | 2025-02-10 08:54:51 | INFO  | Task efc8200f-45e9-4847-81fe-98d29feb73fd for device testbed-switch-2 is running in background 2025-02-10 08:54:51.675424 | orchestrator | 2025-02-10 08:54:51 | INFO  | Task c89d48c8-f3b8-4f41-952e-705e112ac08c for device testbed-node-6 is running 
in background 2025-02-10 08:54:51.679003 | orchestrator | 2025-02-10 08:54:51 | INFO  | Task 34f3ec00-b768-4ff2-b217-f316ccec9257 for device testbed-node-0 is running in background 2025-02-10 08:54:51.679678 | orchestrator | 2025-02-10 08:54:51 | INFO  | Task f1726ae8-ca04-4a10-a2cd-d0dd9338d22f for device testbed-node-1 is running in background 2025-02-10 08:54:51.901173 | orchestrator | 2025-02-10 08:54:51 | INFO  | Tasks are running in background. No more output. Check Flower for logs. 2025-02-10 08:54:51.901326 | orchestrator | + osism netbox disable --no-wait testbed-switch-0 2025-02-10 08:54:53.585527 | orchestrator | + osism netbox disable --no-wait testbed-switch-1 2025-02-10 08:54:55.222073 | orchestrator | + osism netbox disable --no-wait testbed-switch-2 2025-02-10 08:54:56.885674 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-02-10 08:54:57.129840 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-02-10 08:54:57.135172 | orchestrator | ceph-ansible quay.io/osism/ceph-ansible:quincy "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:54:57.135249 | orchestrator | kolla-ansible quay.io/osism/kolla-ansible:2024.1 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:54:57.135265 | orchestrator | manager-api-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2025-02-10 08:54:57.135282 | orchestrator | manager-ara-server-1 quay.io/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2025-02-10 08:54:57.135297 | orchestrator | manager-beat-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" beat 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:54:57.135311 | orchestrator | manager-conductor-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" conductor 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:54:57.135326 | orchestrator | manager-flower-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" flower 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:54:57.135341 | orchestrator | manager-inventory_reconciler-1 quay.io/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:54:57.135360 | orchestrator | manager-listener-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" listener 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:54:57.135383 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2025-02-10 08:54:57.135450 | orchestrator | manager-netbox-1 quay.io/osism/osism-netbox:latest "/usr/bin/tini -- os…" netbox 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:54:57.135520 | orchestrator | manager-openstack-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" openstack 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:54:57.135546 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2025-02-10 08:54:57.135564 | orchestrator | manager-watchdog-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" watchdog 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:54:57.135579 | orchestrator | osism-ansible quay.io/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:54:57.135593 | orchestrator | osism-kubernetes 
quay.io/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:54:57.135608 | orchestrator | osismclient quay.io/osism/osism:latest "/usr/bin/tini -- sl…" osismclient 2 minutes ago Up 2 minutes (healthy) 2025-02-10 08:54:57.135636 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-02-10 08:54:57.312835 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-02-10 08:54:57.321545 | orchestrator | netbox-netbox-1 quay.io/osism/netbox:v4.1.10 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy) 2025-02-10 08:54:57.321647 | orchestrator | netbox-netbox-worker-1 quay.io/osism/netbox:v4.1.10 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy) 2025-02-10 08:54:57.321661 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 7 minutes (healthy) 5432/tcp 2025-02-10 08:54:57.321674 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 7 minutes (healthy) 6379/tcp 2025-02-10 08:54:57.321700 | orchestrator | ++ semver latest 7.0.0 2025-02-10 08:54:57.373588 | orchestrator | + [[ -1 -ge 0 ]] 2025-02-10 08:54:57.377877 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-10 08:54:57.377957 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-02-10 08:54:57.377991 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-02-10 08:54:58.901163 | orchestrator | 2025-02-10 08:54:58 | INFO  | Task 246e86f0-0a1c-417e-bdca-4c56675af739 (resolvconf) was prepared for execution. 2025-02-10 08:55:01.310100 | orchestrator | 2025-02-10 08:54:58 | INFO  | It takes a moment until task 246e86f0-0a1c-417e-bdca-4c56675af739 (resolvconf) has been started and output is visible here. 
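Before the first role is applied, the trace above shows four preparatory steps: the testbed devices are switched to state "a" (presumably active) in NetBox, the three virtual switches are marked disabled, the manager and NetBox compose stacks are listed to confirm every container reports healthy, and a version guard decides whether to swap the Ansible stdout callback in ansible.cfg. A rough shell equivalent of that sequence, reconstructed from the trace — the meaning of the positional argument to "osism netbox connect" and the exact purpose of the sed are assumptions, and the literal "latest" stands in for the manager version variable the real script uses:

    # Activate devices in NetBox; "1000" is taken verbatim from the trace (opaque parameter).
    osism netbox connect 1000 --state a
    # Mark the virtual switches as disabled without waiting for the tasks to finish.
    for sw in testbed-switch-0 testbed-switch-1 testbed-switch-2; do
        osism netbox disable --no-wait "$sw"
    done

    # Sanity-check that the manager and NetBox container stacks are up and healthy.
    docker compose --project-directory /opt/manager ps
    docker compose --project-directory /opt/netbox ps

    # Version guard seen in the trace: semver compares the manager version against 7.0.0,
    # and on "latest" the stdout callback is switched to osism.commons.still_alive
    # (assumed purpose of the sed).
    if [[ "$(semver latest 7.0.0)" -ge 0 ]] || [[ "latest" == "latest" ]]; then
        sed -i 's/community.general.yaml/osism.commons.still_alive/' \
            /opt/configuration/environments/ansible.cfg
    fi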
2025-02-10 08:55:01.310275 | orchestrator | 2025-02-10 08:55:01.311837 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-02-10 08:55:01.311962 | orchestrator | 2025-02-10 08:55:01.312413 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 08:55:01.312618 | orchestrator | Monday 10 February 2025 08:55:01 +0000 (0:00:00.068) 0:00:00.068 ******* 2025-02-10 08:55:04.633385 | orchestrator | ok: [testbed-manager] 2025-02-10 08:55:04.692409 | orchestrator | 2025-02-10 08:55:04.692657 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-02-10 08:55:04.692682 | orchestrator | Monday 10 February 2025 08:55:04 +0000 (0:00:03.321) 0:00:03.389 ******* 2025-02-10 08:55:04.692717 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:55:04.766322 | orchestrator | 2025-02-10 08:55:04.766447 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-02-10 08:55:04.766541 | orchestrator | Monday 10 February 2025 08:55:04 +0000 (0:00:00.059) 0:00:03.449 ******* 2025-02-10 08:55:04.766572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-02-10 08:55:04.841874 | orchestrator | 2025-02-10 08:55:04.842080 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-02-10 08:55:04.842106 | orchestrator | Monday 10 February 2025 08:55:04 +0000 (0:00:00.074) 0:00:03.524 ******* 2025-02-10 08:55:04.842143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-02-10 08:55:04.842853 | orchestrator | 2025-02-10 08:55:04.842892 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-02-10 08:55:04.843001 | orchestrator | Monday 10 February 2025 08:55:04 +0000 (0:00:00.075) 0:00:03.600 ******* 2025-02-10 08:55:05.756230 | orchestrator | ok: [testbed-manager] 2025-02-10 08:55:05.802976 | orchestrator | 2025-02-10 08:55:05.803119 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-02-10 08:55:05.803139 | orchestrator | Monday 10 February 2025 08:55:05 +0000 (0:00:00.911) 0:00:04.511 ******* 2025-02-10 08:55:05.803174 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:55:05.803831 | orchestrator | 2025-02-10 08:55:05.803870 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-02-10 08:55:06.265173 | orchestrator | Monday 10 February 2025 08:55:05 +0000 (0:00:00.051) 0:00:04.563 ******* 2025-02-10 08:55:06.265364 | orchestrator | ok: [testbed-manager] 2025-02-10 08:55:06.268725 | orchestrator | 2025-02-10 08:55:06.329391 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-02-10 08:55:06.329587 | orchestrator | Monday 10 February 2025 08:55:06 +0000 (0:00:00.454) 0:00:05.017 ******* 2025-02-10 08:55:06.329626 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:55:06.810738 | orchestrator | 2025-02-10 08:55:06.810896 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-02-10 08:55:06.810920 | orchestrator | Monday 10 February 2025 08:55:06 +0000 (0:00:00.069) 0:00:05.087 
******* 2025-02-10 08:55:06.810957 | orchestrator | changed: [testbed-manager] 2025-02-10 08:55:06.811749 | orchestrator | 2025-02-10 08:55:06.815109 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-02-10 08:55:06.815146 | orchestrator | Monday 10 February 2025 08:55:06 +0000 (0:00:00.479) 0:00:05.567 ******* 2025-02-10 08:55:07.701364 | orchestrator | changed: [testbed-manager] 2025-02-10 08:55:08.531134 | orchestrator | 2025-02-10 08:55:08.531267 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-02-10 08:55:08.531282 | orchestrator | Monday 10 February 2025 08:55:07 +0000 (0:00:00.890) 0:00:06.458 ******* 2025-02-10 08:55:08.531310 | orchestrator | ok: [testbed-manager] 2025-02-10 08:55:08.600329 | orchestrator | 2025-02-10 08:55:08.600462 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-02-10 08:55:08.600620 | orchestrator | Monday 10 February 2025 08:55:08 +0000 (0:00:00.828) 0:00:07.286 ******* 2025-02-10 08:55:08.600655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-02-10 08:55:08.600789 | orchestrator | 2025-02-10 08:55:08.600811 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-02-10 08:55:08.601071 | orchestrator | Monday 10 February 2025 08:55:08 +0000 (0:00:00.071) 0:00:07.358 ******* 2025-02-10 08:55:09.587219 | orchestrator | changed: [testbed-manager] 2025-02-10 08:55:09.588161 | orchestrator | 2025-02-10 08:55:09.588425 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:55:09.588464 | orchestrator | 2025-02-10 08:55:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 08:55:09.588644 | orchestrator | 2025-02-10 08:55:09 | INFO  | Please wait and do not abort execution. 
2025-02-10 08:55:09.588938 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 08:55:09.593988 | orchestrator | 2025-02-10 08:55:09.596983 | orchestrator | 2025-02-10 08:55:09.597053 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 08:55:09.597208 | orchestrator | Monday 10 February 2025 08:55:09 +0000 (0:00:00.986) 0:00:08.344 ******* 2025-02-10 08:55:09.598525 | orchestrator | =============================================================================== 2025-02-10 08:55:09.598573 | orchestrator | Gathering Facts --------------------------------------------------------- 3.32s 2025-02-10 08:55:09.601299 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 0.99s 2025-02-10 08:55:09.603886 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.91s 2025-02-10 08:55:09.604077 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.89s 2025-02-10 08:55:09.604103 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.83s 2025-02-10 08:55:09.604119 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.48s 2025-02-10 08:55:09.604136 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.45s 2025-02-10 08:55:09.604151 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-02-10 08:55:09.604166 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2025-02-10 08:55:09.604188 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2025-02-10 08:55:09.604303 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2025-02-10 08:55:09.604590 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-02-10 08:55:09.606627 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2025-02-10 08:55:09.898910 | orchestrator | + osism apply sshconfig 2025-02-10 08:55:11.182561 | orchestrator | 2025-02-10 08:55:11 | INFO  | Task 8668f018-22bb-4fc6-b49e-8d29afc19879 (sshconfig) was prepared for execution. 2025-02-10 08:55:14.048700 | orchestrator | 2025-02-10 08:55:11 | INFO  | It takes a moment until task 8668f018-22bb-4fc6-b49e-8d29afc19879 (sshconfig) has been started and output is visible here. 
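The resolvconf run above is limited to the manager via "-l testbed-manager"; the following roles run against their default targets. In each case "osism apply" hands the named playbook to the OSISM manager service, which is why the client first prints that the task "was prepared for execution" and only then streams the play output. The calls, exactly as they appear in this trace:

    # Apply individual OSISM roles through the manager; -l limits the play to one host.
    osism apply resolvconf -l testbed-manager   # configure systemd-resolved on the manager
    osism apply sshconfig                       # assemble per-host SSH config for the operator user
    osism apply known-hosts                     # pre-populate known_hosts via ssh-keyscan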
2025-02-10 08:55:14.048871 | orchestrator | 2025-02-10 08:55:14.492923 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-02-10 08:55:14.493166 | orchestrator | 2025-02-10 08:55:14.493208 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-02-10 08:55:14.493224 | orchestrator | Monday 10 February 2025 08:55:14 +0000 (0:00:00.086) 0:00:00.086 ******* 2025-02-10 08:55:14.493257 | orchestrator | ok: [testbed-manager] 2025-02-10 08:55:14.934094 | orchestrator | 2025-02-10 08:55:14.934227 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-02-10 08:55:14.934244 | orchestrator | Monday 10 February 2025 08:55:14 +0000 (0:00:00.447) 0:00:00.533 ******* 2025-02-10 08:55:14.934273 | orchestrator | changed: [testbed-manager] 2025-02-10 08:55:14.935711 | orchestrator | 2025-02-10 08:55:14.935766 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-02-10 08:55:14.935791 | orchestrator | Monday 10 February 2025 08:55:14 +0000 (0:00:00.440) 0:00:00.973 ******* 2025-02-10 08:55:19.688407 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-02-10 08:55:19.737412 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-02-10 08:55:19.737645 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-02-10 08:55:19.737684 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-02-10 08:55:19.737701 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-02-10 08:55:19.737738 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-02-10 08:55:19.737781 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-02-10 08:55:19.737796 | orchestrator | 2025-02-10 08:55:19.737812 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-02-10 08:55:19.737827 | orchestrator | Monday 10 February 2025 08:55:19 +0000 (0:00:04.752) 0:00:05.726 ******* 2025-02-10 08:55:19.737860 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:55:19.739034 | orchestrator | 2025-02-10 08:55:19.739096 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-02-10 08:55:20.185741 | orchestrator | Monday 10 February 2025 08:55:19 +0000 (0:00:00.050) 0:00:05.777 ******* 2025-02-10 08:55:20.185909 | orchestrator | changed: [testbed-manager] 2025-02-10 08:55:20.187963 | orchestrator | 2025-02-10 08:55:20.188018 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:55:20.188029 | orchestrator | 2025-02-10 08:55:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 08:55:20.188040 | orchestrator | 2025-02-10 08:55:20 | INFO  | Please wait and do not abort execution. 
2025-02-10 08:55:20.188059 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 08:55:20.190714 | orchestrator | 2025-02-10 08:55:20.505201 | orchestrator | 2025-02-10 08:55:20.505300 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 08:55:20.505309 | orchestrator | Monday 10 February 2025 08:55:20 +0000 (0:00:00.447) 0:00:06.224 ******* 2025-02-10 08:55:20.505315 | orchestrator | =============================================================================== 2025-02-10 08:55:20.505320 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 4.75s 2025-02-10 08:55:20.505325 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.45s 2025-02-10 08:55:20.505330 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.45s 2025-02-10 08:55:20.505335 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.44s 2025-02-10 08:55:20.505341 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.05s 2025-02-10 08:55:20.505357 | orchestrator | + osism apply known-hosts 2025-02-10 08:55:21.933736 | orchestrator | 2025-02-10 08:55:21 | INFO  | Task b9672e37-e79f-4639-916e-8d461247bfbd (known-hosts) was prepared for execution. 2025-02-10 08:55:24.781542 | orchestrator | 2025-02-10 08:55:21 | INFO  | It takes a moment until task b9672e37-e79f-4639-916e-8d461247bfbd (known-hosts) has been started and output is visible here. 2025-02-10 08:55:24.781892 | orchestrator | 2025-02-10 08:55:24.782180 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-02-10 08:55:24.782208 | orchestrator | 2025-02-10 08:55:24.782223 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-02-10 08:55:24.782246 | orchestrator | Monday 10 February 2025 08:55:24 +0000 (0:00:00.101) 0:00:00.101 ******* 2025-02-10 08:55:30.006836 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-02-10 08:55:30.007757 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-02-10 08:55:30.009106 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-02-10 08:55:30.009191 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-02-10 08:55:30.009206 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-02-10 08:55:30.009228 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-02-10 08:55:30.009728 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-02-10 08:55:30.011004 | orchestrator | 2025-02-10 08:55:30.011409 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-02-10 08:55:30.011739 | orchestrator | Monday 10 February 2025 08:55:30 +0000 (0:00:05.229) 0:00:05.331 ******* 2025-02-10 08:55:30.175632 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-02-10 08:55:30.176232 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-02-10 08:55:30.177194 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-02-10 08:55:30.178205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-02-10 08:55:30.178604 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-02-10 08:55:30.179371 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-02-10 08:55:30.180633 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-02-10 08:55:30.180988 | orchestrator | 2025-02-10 08:55:30.182076 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:55:30.182205 | orchestrator | Monday 10 February 2025 08:55:30 +0000 (0:00:00.169) 0:00:05.500 ******* 2025-02-10 08:55:31.236028 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTEXapfLO2qfirCJGuL7iXd/yfXNDVQ05ANywkHowiI9mWYhVhgxwhAkgdwOx4qu7MYtSF/1w7w9VoL1B+lymKPwu/weN84NnFNDbiIutYFomQpGq80RllG1X9ajb/C6lK1Lz62WvUbhVP20gRZ8fdvZM+T/Ga7eT8wnI3NP+y/qerVnp1RYtK1dBP51/ZbLxejBwC8FZI+tLD7fDbl6hMeFwbMgLQ+BjAifhJ2fObx7rGixGfOSXYUet1TfEPzLB2RUi6iWlabs+kAzvGiq27bZipVWOCkjzSkRa5u8m45prW/eUQS/g6fs5z78IVy8CVzRgmJrwzApXZ5x3GrJ3pqaFbrHdrKPKXdU9Ew6/2jMs8AsLnJgVG3DevqJRBkqsVfzUhhvCKhv67bobktqinz4AXKxDXaULtsLFikgYGhVryjFTxWkZ0MwfrtxdSakgY6/lUl5FxfVmzzcEVrBu3W+Hx/FioyRJmv66plSaKUUCPT0sKmuXtsXjma9pMev0=) 2025-02-10 08:55:31.238259 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLLRCRBkM0jvBVd14zDYd3RG7Q7jVrv04yuHpqo0/UALoMXqk5BkygNPC4AViN4Z4k90lljyKkIxunfT6gcf/9E=) 2025-02-10 08:55:32.356953 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINqYmxxgaP+XIlXW0pqoggwa+T/NqNaL62utjAfllMMu) 2025-02-10 08:55:32.357099 | orchestrator | 2025-02-10 08:55:32.357122 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:55:32.357139 | orchestrator | Monday 10 February 2025 08:55:31 +0000 (0:00:01.056) 0:00:06.556 ******* 2025-02-10 08:55:32.357174 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLFuE0SH5fc2v5xGbseeCL8G7xBaNGZM0tjONrSxlptq+jxR3hzdb+bo348uKu9IHwODHf+ahvVkNxoR72i9KbY=) 2025-02-10 08:55:32.360717 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQD2QX+Cy0y2JutyRBKeo4FwddqAEI/2IXzJYNu7uSMbxOqZ02VDFOPI7+CSB4fADsiKL2d02pbnRJe5FfAWKuX5sj7irB40RHKP9cx2mbiDluACryKaNmsqN0FBcPqL9tMcbueLYsvKZHcZZt2F7pABiPoP0qoPIJbeyj7FPaPME70sqllqCFn+SUCRZOT/FUYl9w+H7zm+4lzSkkRoFU/nNGIw7G1d1vd4jdAkrZ0q/D2uVCphovis12rKzWwoc8cyUm7L5kjAOe0XaRiGy+SsSua8XXl+rczl8UVqe1uJqgSkD8nNSEw/zV38dZ6fT4XibiCZ8AUUD5FfqK7C5rQUbDZnjJoECvzDDr7PUiTRljdVeiFGJq+6UJEFF4/7YsSe+3Kt2jOMnA8XSeqWu2reSGV6tzKXuSJptakymr5R1Z7SMyRzXFHShUvG2IHeT48rKEzppea4jk7luw7JDQTsgFitP54SHbkspJxOLSSATKtNYRNH6Lpb65pZCbLcshk=) 2025-02-10 08:55:32.360782 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPDfB2hOYsgpxwSET9RigNNlyKwh8HTwHJJQjEshLdBM) 2025-02-10 08:55:32.361315 | orchestrator | 2025-02-10 08:55:32.362251 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:55:32.363400 | orchestrator | Monday 10 February 2025 08:55:32 +0000 (0:00:01.121) 0:00:07.677 ******* 2025-02-10 08:55:33.477528 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUfGsh22n8Pw/PUv3tu8lpgA8LLVDBSOAx/m9MkvlPpothSuJCTn2DwpGjzHgS8QNZZZRY1r96RtoREjh/GUG/DC2nFJUZ1V3ZFtTHD+M+pt5FSvpAALq4yoXr8Pk8Ad4Brz76tLf/jYTCBTZeDUA0RN2qiY1z90TnZKq9E1eTTh9NLIOGMkKum1PgNjZK+lA3bdazpG9++sp8qgg3yxkjFsBpD9b4tP22DIMp6wv43t2yS1DVCvsX6L0y1UeP3yowvRz6HV4tkxQJiBmYX4N/IyzQHd4uHM410cac+f7zdyVAJ0zyGj4Hh3QgW7Nc2SUOVhfFjd//HWm/sy7IAC4iysXP4a2HymdvzTD16kKPpDEn4Rl6uv7Qn1ZhZ7PhsEr+sNSHOmQQW4Tr3abdoH8sO3A28i35Q2c8m47OSyJLIOrNBBWF2/n2ewT8YUiPZFuUl8IVORvI8TszUzDPueZL+8NXlRlQyAtlfjGZyg4N0Xko9fxWqfZ68jB4OA9V0Uc=) 2025-02-10 08:55:33.477891 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFlgt4T282H+oy+gEk5oXSR2yTxGHJ6XX/L56dQe0pHf) 2025-02-10 08:55:33.477941 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEaW03AaRQ3kd2TMQW1067B14HM8vfqxFVioyNvqrN5E84M7QJxV+3Xl9j4vKflk+QPTaVxfBjdiCN6mcYP9tng=) 2025-02-10 08:55:33.478357 | orchestrator | 2025-02-10 08:55:33.479257 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:55:33.479785 | orchestrator | Monday 10 February 2025 08:55:33 +0000 (0:00:01.122) 0:00:08.800 ******* 2025-02-10 08:55:34.579323 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDrNdZvKZ9QdcQdq/yXpKgCAmM/qMQhlp1hCxKhnA6JsAD7K4fNgc8NaUJ0CL/TDZ12t1oD0vdf6sDKoxgrzTigwOjTjchmEsmvy/yNgdnIX+lv5POKP3PRiy8rB8WMCzUH7QumP0VR47tjTDLcmvfray98cmPS9I4wuqpoAo3qYVh9ZZ0WYFQfGrzmoAU4IlWy25ORtTAgTPS7en2w5S7A/uE6mrT9l8hd+gSRKm53Ta3ItwyAzad1NgnlEM27RXMJfEKi2lRYOTeInDGJTMKKiaPMXmUjYFhSh+Qev1sBlpSmvpirJ7L5zkaMX8I0gu/rzeVg9Ee9zNJd3HqJlH8paa0sKTbz8ZYNOsKQmYKUebmywpgD1ciolNWr9IhX3RmffbaH+zhItX8T2AtSFqI59K3c/HNbilXh3JnprK3cx0PB0ifz5BWCeTZgfBCiNhPLWB09h5XGmotQbkvr39+uTdyDh/arPoHqIoGbGCxVmaB+qAQ3qXWAfdLb/JldpjM=) 2025-02-10 08:55:34.580027 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMnNLUFhJRg7STuGziCudP79s2TxPI2I1dyXSOfzAfVx0ZYNSUhDiQv4OeduV1T9Pzihfg+CAZCeGE0g++lbVi4=) 2025-02-10 08:55:34.580086 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGM779GfOd64oKg4toXOLFBXIo3C6cIMgN2IYqr+ERXH) 2025-02-10 08:55:34.580124 | orchestrator | 2025-02-10 08:55:34.582364 | orchestrator | TASK 
[osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:55:34.582822 | orchestrator | Monday 10 February 2025 08:55:34 +0000 (0:00:01.099) 0:00:09.900 ******* 2025-02-10 08:55:35.661302 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCMfQP4nU76tUucz99b+mtvxb3oTRDTrTLYjdbVhT25nwkH5CXLEFXk2c4rWDXOC802PTLDGYmy82FRMGB8q962xZSXFlQz9xjXTMjErjBf4dMXX5xyv/idBEr7F60uu6EmInzohjTyGdh3Ghe1KIGd3BLWPBP5eDD1Um/AnC+J+fPSkuAlGpMYweHYQD5BlYwY35BZl3zS2NS0/p6ZWhMY+kfqPn9gkjrp7fzMewWkGmp4BpMMeRUYqwhNVn81pM0MYBGFbmPdgEO5Zl4VFBvg2GTEal5nxG03hBtGcTpQNPsroJwC6HAR1k7noiKgn4m8scIGIIK/vxN112r53+hy2hdFZeZ7T0Np2IkFjf3GYZK2sBXSps/5Q4O5fQcgVvfgZ4YkE1wtnT6K6VXEAqKB8ZPb8NR3FkEALN+tcFfn5XGWGmyUZemkJNPx1rlgXGGJSnBwbYrxwzQUqTVOFNBO9+e8LdcUryfnHZ+NWWe3hugI6sUYF03Lbphvvq/GbHU=) 2025-02-10 08:55:35.662403 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNoi9v6PEDycQarnCmP/rhA0HszfV1YZ6Hbn5sGBoawgl5Oo+FbjZqNTHbcXjbINDX5wSFmQj2y7Lv8d4LUbg84=) 2025-02-10 08:55:35.663157 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP8W0aR0icbiOMCKnlXU5U7rDRS/b8Z37YAzJGRzF8qS) 2025-02-10 08:55:35.663807 | orchestrator | 2025-02-10 08:55:35.664147 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:55:35.664801 | orchestrator | Monday 10 February 2025 08:55:35 +0000 (0:00:01.083) 0:00:10.984 ******* 2025-02-10 08:55:36.720065 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAdWSZvgHH+KpisSH87e0IvSXGH8wsKJhXcZSSt06MjYM0tQHfr542jgSXtP1eCwXWAc/GWdAC+6WMfjWWjfylQ=) 2025-02-10 08:55:36.722594 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgcJddapIgLn0jTSEYQT7CyyRycLO+PUYX68qJynuJggFa9oNpUcl+3Mol1ACjjWgXWdxeYvAhGMxs3ErCkZ6lXq6VP/sDAMhc0EexMTNcQJPm/hC9BXwK41fk9AY6lFPVvoCOXU2SwnpfJVM4yZkROTJwlTDTGixY72v7OlmJ/W35a0CWQy8BrXyd1uhl+PshgC1irDMHrOTrl4zIeGCWyi70V9Ssy05We4BujyJFi+7pVcm5LpNLQpSsv67W/71Esdy8IkmYKm7tCw/CimZ7hkjK8BG1oH+jI5rmWE/7LTZo6LWgrs4Qblg4LOK6cJVNlcf7GoMEdnJmAdJrpWNhssZsBqGbW836SSlV0cNL3LBoNYwvDhVnZ8XN3O8LDLwnaYZUA+18Z0ctteeiqShNFFwUneqaPTfT2N9ZhgUwCexC1TRHmMz4Liv8ge+PCwLgKm2pSrwuNdtlrp2i9q7WdD8nejg0ugX0kUJ6/WQaHS5nekQd9YgYrfslA+bubEM=) 2025-02-10 08:55:36.722639 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII9iBjDQfs7YJ+965RiIDSjWk90j3gZrn6dMe+aFB+DN) 2025-02-10 08:55:36.722941 | orchestrator | 2025-02-10 08:55:36.723889 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:55:36.725126 | orchestrator | Monday 10 February 2025 08:55:36 +0000 (0:00:01.059) 0:00:12.043 ******* 2025-02-10 08:55:37.786797 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCyidZ6LUClGd2LdAwQ3PhLItpq0tr3OZVdaNbGh3NpWlamopX5UOd1dbS++zQoG0JxOckfmQtwAIQG32uNxN32DPxFyEAXlI0VT6LzqxV6Z9gxCVffOEHRp+yPkTw7YZb84uNgwUT/ImuxZcDqSYyC6W3YlUa8Sch3K9qi1FPuuvB26amuHJ3DhyPehnyKS+U+Xi9j+b+Fs2PvYazbdc1M4l4Oyi18XVUl0xKzRsuRhp9e0ehNi2T6FVaCMUHHWao44say9hWD/nee4L9Do5lZSOi0KEPefSQYCwkgbVkRbvgodT3HNwESeink+164Dlu97MPJ6VyaYxUvs2xVSRnOpsphBMT+BVnm3J0j6xlIZeM8s7TR53zffGFuGHUVrN0X2xLlaOvsw7q226A+RysvqOPO5PlDJENnICMFEK1YiyQUjjPemsaWrvs6rSrATwYzuTgDmxq+1t1iNZe+t4IfLa+4HUjihKjTq6iTPcyqF0vW/a6fwG/H6AJ43rbwmb8=) 2025-02-10 08:55:37.787954 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDEn0F2RyGYCwKHYB+m1l85gXpckZ7dbgAcQwBo5kHdQYH0iMdmCxn/nfuOEskxoDvUURfmuTfs8xPjBeZJqU0E=) 2025-02-10 08:55:37.788025 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOtcxj5Fbh/vlqD1fl4xkTcLacYuwif3PlUMsSk5t1vv) 2025-02-10 08:55:37.788059 | orchestrator | 2025-02-10 08:55:37.788328 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-02-10 08:55:37.788711 | orchestrator | Monday 10 February 2025 08:55:37 +0000 (0:00:01.065) 0:00:13.108 ******* 2025-02-10 08:55:42.998316 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-02-10 08:55:42.999245 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-02-10 08:55:42.999276 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-02-10 08:55:43.000385 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-02-10 08:55:43.001170 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-02-10 08:55:43.002954 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-02-10 08:55:43.004163 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-02-10 08:55:43.005902 | orchestrator | 2025-02-10 08:55:43.006896 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-02-10 08:55:43.006952 | orchestrator | Monday 10 February 2025 08:55:42 +0000 (0:00:05.211) 0:00:18.320 ******* 2025-02-10 08:55:43.182615 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-02-10 08:55:43.183288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-02-10 08:55:43.184281 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-02-10 08:55:43.185174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-02-10 08:55:43.185836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-02-10 08:55:43.187156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of 
testbed-node-1) 2025-02-10 08:55:43.187224 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-02-10 08:55:43.187839 | orchestrator | 2025-02-10 08:55:43.188311 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:55:43.188791 | orchestrator | Monday 10 February 2025 08:55:43 +0000 (0:00:00.186) 0:00:18.506 ******* 2025-02-10 08:55:44.276404 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTEXapfLO2qfirCJGuL7iXd/yfXNDVQ05ANywkHowiI9mWYhVhgxwhAkgdwOx4qu7MYtSF/1w7w9VoL1B+lymKPwu/weN84NnFNDbiIutYFomQpGq80RllG1X9ajb/C6lK1Lz62WvUbhVP20gRZ8fdvZM+T/Ga7eT8wnI3NP+y/qerVnp1RYtK1dBP51/ZbLxejBwC8FZI+tLD7fDbl6hMeFwbMgLQ+BjAifhJ2fObx7rGixGfOSXYUet1TfEPzLB2RUi6iWlabs+kAzvGiq27bZipVWOCkjzSkRa5u8m45prW/eUQS/g6fs5z78IVy8CVzRgmJrwzApXZ5x3GrJ3pqaFbrHdrKPKXdU9Ew6/2jMs8AsLnJgVG3DevqJRBkqsVfzUhhvCKhv67bobktqinz4AXKxDXaULtsLFikgYGhVryjFTxWkZ0MwfrtxdSakgY6/lUl5FxfVmzzcEVrBu3W+Hx/FioyRJmv66plSaKUUCPT0sKmuXtsXjma9pMev0=) 2025-02-10 08:55:44.277876 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLLRCRBkM0jvBVd14zDYd3RG7Q7jVrv04yuHpqo0/UALoMXqk5BkygNPC4AViN4Z4k90lljyKkIxunfT6gcf/9E=) 2025-02-10 08:55:44.278248 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINqYmxxgaP+XIlXW0pqoggwa+T/NqNaL62utjAfllMMu) 2025-02-10 08:55:44.278282 | orchestrator | 2025-02-10 08:55:44.278304 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:55:44.278581 | orchestrator | Monday 10 February 2025 08:55:44 +0000 (0:00:01.093) 0:00:19.599 ******* 2025-02-10 08:55:45.351557 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD2QX+Cy0y2JutyRBKeo4FwddqAEI/2IXzJYNu7uSMbxOqZ02VDFOPI7+CSB4fADsiKL2d02pbnRJe5FfAWKuX5sj7irB40RHKP9cx2mbiDluACryKaNmsqN0FBcPqL9tMcbueLYsvKZHcZZt2F7pABiPoP0qoPIJbeyj7FPaPME70sqllqCFn+SUCRZOT/FUYl9w+H7zm+4lzSkkRoFU/nNGIw7G1d1vd4jdAkrZ0q/D2uVCphovis12rKzWwoc8cyUm7L5kjAOe0XaRiGy+SsSua8XXl+rczl8UVqe1uJqgSkD8nNSEw/zV38dZ6fT4XibiCZ8AUUD5FfqK7C5rQUbDZnjJoECvzDDr7PUiTRljdVeiFGJq+6UJEFF4/7YsSe+3Kt2jOMnA8XSeqWu2reSGV6tzKXuSJptakymr5R1Z7SMyRzXFHShUvG2IHeT48rKEzppea4jk7luw7JDQTsgFitP54SHbkspJxOLSSATKtNYRNH6Lpb65pZCbLcshk=) 2025-02-10 08:55:45.352320 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLFuE0SH5fc2v5xGbseeCL8G7xBaNGZM0tjONrSxlptq+jxR3hzdb+bo348uKu9IHwODHf+ahvVkNxoR72i9KbY=) 2025-02-10 08:55:45.352664 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPDfB2hOYsgpxwSET9RigNNlyKwh8HTwHJJQjEshLdBM) 2025-02-10 08:55:45.353591 | orchestrator | 2025-02-10 08:55:45.354276 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:55:45.354921 | orchestrator | Monday 10 February 2025 08:55:45 +0000 (0:00:01.073) 0:00:20.673 ******* 2025-02-10 08:55:46.453278 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCUfGsh22n8Pw/PUv3tu8lpgA8LLVDBSOAx/m9MkvlPpothSuJCTn2DwpGjzHgS8QNZZZRY1r96RtoREjh/GUG/DC2nFJUZ1V3ZFtTHD+M+pt5FSvpAALq4yoXr8Pk8Ad4Brz76tLf/jYTCBTZeDUA0RN2qiY1z90TnZKq9E1eTTh9NLIOGMkKum1PgNjZK+lA3bdazpG9++sp8qgg3yxkjFsBpD9b4tP22DIMp6wv43t2yS1DVCvsX6L0y1UeP3yowvRz6HV4tkxQJiBmYX4N/IyzQHd4uHM410cac+f7zdyVAJ0zyGj4Hh3QgW7Nc2SUOVhfFjd//HWm/sy7IAC4iysXP4a2HymdvzTD16kKPpDEn4Rl6uv7Qn1ZhZ7PhsEr+sNSHOmQQW4Tr3abdoH8sO3A28i35Q2c8m47OSyJLIOrNBBWF2/n2ewT8YUiPZFuUl8IVORvI8TszUzDPueZL+8NXlRlQyAtlfjGZyg4N0Xko9fxWqfZ68jB4OA9V0Uc=) 2025-02-10 08:55:46.453701 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEaW03AaRQ3kd2TMQW1067B14HM8vfqxFVioyNvqrN5E84M7QJxV+3Xl9j4vKflk+QPTaVxfBjdiCN6mcYP9tng=) 2025-02-10 08:55:46.453958 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFlgt4T282H+oy+gEk5oXSR2yTxGHJ6XX/L56dQe0pHf) 2025-02-10 08:55:46.454572 | orchestrator | 2025-02-10 08:55:46.455253 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:55:46.455905 | orchestrator | Monday 10 February 2025 08:55:46 +0000 (0:00:01.103) 0:00:21.777 ******* 2025-02-10 08:55:47.534095 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDrNdZvKZ9QdcQdq/yXpKgCAmM/qMQhlp1hCxKhnA6JsAD7K4fNgc8NaUJ0CL/TDZ12t1oD0vdf6sDKoxgrzTigwOjTjchmEsmvy/yNgdnIX+lv5POKP3PRiy8rB8WMCzUH7QumP0VR47tjTDLcmvfray98cmPS9I4wuqpoAo3qYVh9ZZ0WYFQfGrzmoAU4IlWy25ORtTAgTPS7en2w5S7A/uE6mrT9l8hd+gSRKm53Ta3ItwyAzad1NgnlEM27RXMJfEKi2lRYOTeInDGJTMKKiaPMXmUjYFhSh+Qev1sBlpSmvpirJ7L5zkaMX8I0gu/rzeVg9Ee9zNJd3HqJlH8paa0sKTbz8ZYNOsKQmYKUebmywpgD1ciolNWr9IhX3RmffbaH+zhItX8T2AtSFqI59K3c/HNbilXh3JnprK3cx0PB0ifz5BWCeTZgfBCiNhPLWB09h5XGmotQbkvr39+uTdyDh/arPoHqIoGbGCxVmaB+qAQ3qXWAfdLb/JldpjM=) 2025-02-10 08:55:47.534352 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMnNLUFhJRg7STuGziCudP79s2TxPI2I1dyXSOfzAfVx0ZYNSUhDiQv4OeduV1T9Pzihfg+CAZCeGE0g++lbVi4=) 2025-02-10 08:55:47.535901 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGM779GfOd64oKg4toXOLFBXIo3C6cIMgN2IYqr+ERXH) 2025-02-10 08:55:47.536299 | orchestrator | 2025-02-10 08:55:47.536786 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:55:47.537303 | orchestrator | Monday 10 February 2025 08:55:47 +0000 (0:00:01.080) 0:00:22.857 ******* 2025-02-10 08:55:48.667239 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCMfQP4nU76tUucz99b+mtvxb3oTRDTrTLYjdbVhT25nwkH5CXLEFXk2c4rWDXOC802PTLDGYmy82FRMGB8q962xZSXFlQz9xjXTMjErjBf4dMXX5xyv/idBEr7F60uu6EmInzohjTyGdh3Ghe1KIGd3BLWPBP5eDD1Um/AnC+J+fPSkuAlGpMYweHYQD5BlYwY35BZl3zS2NS0/p6ZWhMY+kfqPn9gkjrp7fzMewWkGmp4BpMMeRUYqwhNVn81pM0MYBGFbmPdgEO5Zl4VFBvg2GTEal5nxG03hBtGcTpQNPsroJwC6HAR1k7noiKgn4m8scIGIIK/vxN112r53+hy2hdFZeZ7T0Np2IkFjf3GYZK2sBXSps/5Q4O5fQcgVvfgZ4YkE1wtnT6K6VXEAqKB8ZPb8NR3FkEALN+tcFfn5XGWGmyUZemkJNPx1rlgXGGJSnBwbYrxwzQUqTVOFNBO9+e8LdcUryfnHZ+NWWe3hugI6sUYF03Lbphvvq/GbHU=) 2025-02-10 08:55:49.726289 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNoi9v6PEDycQarnCmP/rhA0HszfV1YZ6Hbn5sGBoawgl5Oo+FbjZqNTHbcXjbINDX5wSFmQj2y7Lv8d4LUbg84=) 2025-02-10 
08:55:49.726435 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP8W0aR0icbiOMCKnlXU5U7rDRS/b8Z37YAzJGRzF8qS) 2025-02-10 08:55:49.726456 | orchestrator | 2025-02-10 08:55:49.726523 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:55:49.726540 | orchestrator | Monday 10 February 2025 08:55:48 +0000 (0:00:01.128) 0:00:23.986 ******* 2025-02-10 08:55:49.726601 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgcJddapIgLn0jTSEYQT7CyyRycLO+PUYX68qJynuJggFa9oNpUcl+3Mol1ACjjWgXWdxeYvAhGMxs3ErCkZ6lXq6VP/sDAMhc0EexMTNcQJPm/hC9BXwK41fk9AY6lFPVvoCOXU2SwnpfJVM4yZkROTJwlTDTGixY72v7OlmJ/W35a0CWQy8BrXyd1uhl+PshgC1irDMHrOTrl4zIeGCWyi70V9Ssy05We4BujyJFi+7pVcm5LpNLQpSsv67W/71Esdy8IkmYKm7tCw/CimZ7hkjK8BG1oH+jI5rmWE/7LTZo6LWgrs4Qblg4LOK6cJVNlcf7GoMEdnJmAdJrpWNhssZsBqGbW836SSlV0cNL3LBoNYwvDhVnZ8XN3O8LDLwnaYZUA+18Z0ctteeiqShNFFwUneqaPTfT2N9ZhgUwCexC1TRHmMz4Liv8ge+PCwLgKm2pSrwuNdtlrp2i9q7WdD8nejg0ugX0kUJ6/WQaHS5nekQd9YgYrfslA+bubEM=) 2025-02-10 08:55:49.726996 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAdWSZvgHH+KpisSH87e0IvSXGH8wsKJhXcZSSt06MjYM0tQHfr542jgSXtP1eCwXWAc/GWdAC+6WMfjWWjfylQ=) 2025-02-10 08:55:49.727033 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII9iBjDQfs7YJ+965RiIDSjWk90j3gZrn6dMe+aFB+DN) 2025-02-10 08:55:49.727577 | orchestrator | 2025-02-10 08:55:49.728113 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-10 08:55:49.728692 | orchestrator | Monday 10 February 2025 08:55:49 +0000 (0:00:01.061) 0:00:25.048 ******* 2025-02-10 08:55:50.806416 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyidZ6LUClGd2LdAwQ3PhLItpq0tr3OZVdaNbGh3NpWlamopX5UOd1dbS++zQoG0JxOckfmQtwAIQG32uNxN32DPxFyEAXlI0VT6LzqxV6Z9gxCVffOEHRp+yPkTw7YZb84uNgwUT/ImuxZcDqSYyC6W3YlUa8Sch3K9qi1FPuuvB26amuHJ3DhyPehnyKS+U+Xi9j+b+Fs2PvYazbdc1M4l4Oyi18XVUl0xKzRsuRhp9e0ehNi2T6FVaCMUHHWao44say9hWD/nee4L9Do5lZSOi0KEPefSQYCwkgbVkRbvgodT3HNwESeink+164Dlu97MPJ6VyaYxUvs2xVSRnOpsphBMT+BVnm3J0j6xlIZeM8s7TR53zffGFuGHUVrN0X2xLlaOvsw7q226A+RysvqOPO5PlDJENnICMFEK1YiyQUjjPemsaWrvs6rSrATwYzuTgDmxq+1t1iNZe+t4IfLa+4HUjihKjTq6iTPcyqF0vW/a6fwG/H6AJ43rbwmb8=) 2025-02-10 08:55:50.807830 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDEn0F2RyGYCwKHYB+m1l85gXpckZ7dbgAcQwBo5kHdQYH0iMdmCxn/nfuOEskxoDvUURfmuTfs8xPjBeZJqU0E=) 2025-02-10 08:55:50.808949 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOtcxj5Fbh/vlqD1fl4xkTcLacYuwif3PlUMsSk5t1vv) 2025-02-10 08:55:50.808998 | orchestrator | 2025-02-10 08:55:50.809033 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-02-10 08:55:50.809693 | orchestrator | Monday 10 February 2025 08:55:50 +0000 (0:00:01.081) 0:00:26.129 ******* 2025-02-10 08:55:50.992112 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-02-10 08:55:50.994213 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-02-10 08:55:50.995749 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-02-10 08:55:50.996674 | orchestrator | skipping: [testbed-manager] => 
(item=testbed-node-5)  2025-02-10 08:55:50.997927 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-02-10 08:55:50.998768 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-02-10 08:55:50.999615 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-02-10 08:55:51.000622 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:55:51.001316 | orchestrator | 2025-02-10 08:55:51.001983 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-02-10 08:55:51.002644 | orchestrator | Monday 10 February 2025 08:55:50 +0000 (0:00:00.185) 0:00:26.314 ******* 2025-02-10 08:55:51.063215 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:55:51.111276 | orchestrator | 2025-02-10 08:55:51.111503 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-02-10 08:55:51.111523 | orchestrator | Monday 10 February 2025 08:55:51 +0000 (0:00:00.071) 0:00:26.385 ******* 2025-02-10 08:55:51.111548 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:55:51.112139 | orchestrator | 2025-02-10 08:55:51.112159 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-02-10 08:55:51.112847 | orchestrator | Monday 10 February 2025 08:55:51 +0000 (0:00:00.050) 0:00:26.435 ******* 2025-02-10 08:55:51.791176 | orchestrator | changed: [testbed-manager] 2025-02-10 08:55:51.792325 | orchestrator | 2025-02-10 08:55:51.792356 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:55:51.792544 | orchestrator | 2025-02-10 08:55:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 08:55:51.792897 | orchestrator | 2025-02-10 08:55:51 | INFO  | Please wait and do not abort execution. 
2025-02-10 08:55:51.792916 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 08:55:51.793683 | orchestrator | 2025-02-10 08:55:51.793895 | orchestrator | 2025-02-10 08:55:51.794544 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 08:55:51.795269 | orchestrator | Monday 10 February 2025 08:55:51 +0000 (0:00:00.678) 0:00:27.113 ******* 2025-02-10 08:55:51.796061 | orchestrator | =============================================================================== 2025-02-10 08:55:51.796643 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.23s 2025-02-10 08:55:51.797023 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.21s 2025-02-10 08:55:51.797971 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-02-10 08:55:51.798619 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-02-10 08:55:51.798639 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-02-10 08:55:51.798936 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-02-10 08:55:51.799104 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-02-10 08:55:51.799523 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-02-10 08:55:51.799931 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-02-10 08:55:51.800774 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-02-10 08:55:51.801535 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-02-10 08:55:51.802201 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-02-10 08:55:51.802395 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-02-10 08:55:51.802856 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-02-10 08:55:51.803321 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-02-10 08:55:51.803623 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-02-10 08:55:51.803916 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.68s 2025-02-10 08:55:51.804421 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2025-02-10 08:55:51.804603 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s 2025-02-10 08:55:51.804984 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-02-10 08:55:52.243189 | orchestrator | ++ semver latest 7.0.0 2025-02-10 08:55:52.289699 | orchestrator | + [[ -1 -ge 0 ]] 2025-02-10 08:55:53.777967 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-10 08:55:53.778175 | orchestrator | + osism apply nexus 2025-02-10 08:55:53.778219 | orchestrator | 2025-02-10 08:55:53 | INFO  | Task 0adb4080-34e2-4115-8758-bf679ebe2c19 (nexus) was prepared for execution. 
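The known-hosts play above scans every host twice — once by inventory hostname and once by its ansible_host address — and writes the rsa, ecdsa and ed25519 keys unhashed before fixing the file permissions. Done by hand, the equivalent would look roughly like the sketch below; the target path, key types and permission mode are assumptions, the role may write elsewhere:

    # Rough manual equivalent of osism.commons.known_hosts as seen in this play.
    for host in testbed-manager testbed-node-{0..5}; do
        ssh-keyscan -t rsa,ecdsa,ed25519 "$host"        # scan by hostname
    done >> ~/.ssh/known_hosts
    for ip in 192.168.16.5 192.168.16.1{0..5}; do
        ssh-keyscan -t rsa,ecdsa,ed25519 "$ip"          # scan by ansible_host address
    done >> ~/.ssh/known_hosts
    chmod 0644 ~/.ssh/known_hosts                        # "Set file permissions" step (assumed mode)

The same version guard then repeats and "osism apply nexus" starts the local Nexus service, whose play output follows.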
2025-02-10 08:55:56.900942 | orchestrator | 2025-02-10 08:55:53 | INFO  | It takes a moment until task 0adb4080-34e2-4115-8758-bf679ebe2c19 (nexus) has been started and output is visible here. 2025-02-10 08:55:56.901091 | orchestrator | 2025-02-10 08:55:56.902177 | orchestrator | PLAY [Apply role nexus] ******************************************************** 2025-02-10 08:55:56.905004 | orchestrator | 2025-02-10 08:55:56.906112 | orchestrator | TASK [osism.services.nexus : Include config tasks] ***************************** 2025-02-10 08:55:56.906196 | orchestrator | Monday 10 February 2025 08:55:56 +0000 (0:00:00.144) 0:00:00.144 ******* 2025-02-10 08:55:57.008235 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/config.yml for testbed-manager 2025-02-10 08:55:57.008686 | orchestrator | 2025-02-10 08:55:57.009836 | orchestrator | TASK [osism.services.nexus : Create required directories] ********************** 2025-02-10 08:55:57.010203 | orchestrator | Monday 10 February 2025 08:55:57 +0000 (0:00:00.108) 0:00:00.253 ******* 2025-02-10 08:55:57.856384 | orchestrator | changed: [testbed-manager] => (item=/opt/nexus) 2025-02-10 08:55:57.856971 | orchestrator | changed: [testbed-manager] => (item=/opt/nexus/configuration) 2025-02-10 08:55:57.857952 | orchestrator | 2025-02-10 08:55:57.859299 | orchestrator | TASK [osism.services.nexus : Set UID for nexus_configuration_directory] ******** 2025-02-10 08:55:57.859981 | orchestrator | Monday 10 February 2025 08:55:57 +0000 (0:00:00.849) 0:00:01.103 ******* 2025-02-10 08:55:58.231484 | orchestrator | changed: [testbed-manager] 2025-02-10 08:55:58.232171 | orchestrator | 2025-02-10 08:55:58.232208 | orchestrator | TASK [osism.services.nexus : Copy configuration files] ************************* 2025-02-10 08:55:58.232756 | orchestrator | Monday 10 February 2025 08:55:58 +0000 (0:00:00.372) 0:00:01.476 ******* 2025-02-10 08:56:00.143511 | orchestrator | changed: [testbed-manager] => (item=nexus.properties) 2025-02-10 08:56:00.144770 | orchestrator | changed: [testbed-manager] => (item=nexus.env) 2025-02-10 08:56:00.144861 | orchestrator | 2025-02-10 08:56:00.245024 | orchestrator | TASK [osism.services.nexus : Include service tasks] **************************** 2025-02-10 08:56:00.245108 | orchestrator | Monday 10 February 2025 08:56:00 +0000 (0:00:01.912) 0:00:03.388 ******* 2025-02-10 08:56:00.245143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/service.yml for testbed-manager 2025-02-10 08:56:00.247819 | orchestrator | 2025-02-10 08:56:00.249561 | orchestrator | TASK [osism.services.nexus : Copy nexus systemd unit file] ********************* 2025-02-10 08:56:01.119002 | orchestrator | Monday 10 February 2025 08:56:00 +0000 (0:00:00.104) 0:00:03.493 ******* 2025-02-10 08:56:01.119184 | orchestrator | changed: [testbed-manager] 2025-02-10 08:56:01.119383 | orchestrator | 2025-02-10 08:56:01.120631 | orchestrator | TASK [osism.services.nexus : Create traefik external network] ****************** 2025-02-10 08:56:01.120991 | orchestrator | Monday 10 February 2025 08:56:01 +0000 (0:00:00.872) 0:00:04.365 ******* 2025-02-10 08:56:01.945906 | orchestrator | ok: [testbed-manager] 2025-02-10 08:56:01.946168 | orchestrator | 2025-02-10 08:56:01.947436 | orchestrator | TASK [osism.services.nexus : Copy docker-compose.yml file] ********************* 2025-02-10 08:56:01.951695 | orchestrator | Monday 10 February 2025 08:56:01 +0000 
(0:00:00.825) 0:00:05.190 ******* 2025-02-10 08:56:02.951170 | orchestrator | changed: [testbed-manager] 2025-02-10 08:56:02.951433 | orchestrator | 2025-02-10 08:56:02.951537 | orchestrator | TASK [osism.services.nexus : Stop and disable old service docker-compose@nexus] *** 2025-02-10 08:56:02.951563 | orchestrator | Monday 10 February 2025 08:56:02 +0000 (0:00:01.005) 0:00:06.196 ******* 2025-02-10 08:56:03.931282 | orchestrator | ok: [testbed-manager] 2025-02-10 08:56:03.931803 | orchestrator | 2025-02-10 08:56:03.931849 | orchestrator | TASK [osism.services.nexus : Manage nexus service] ***************************** 2025-02-10 08:56:03.932271 | orchestrator | Monday 10 February 2025 08:56:03 +0000 (0:00:00.979) 0:00:07.175 ******* 2025-02-10 08:56:05.426804 | orchestrator | changed: [testbed-manager] 2025-02-10 08:56:05.429000 | orchestrator | 2025-02-10 08:56:05.429050 | orchestrator | TASK [osism.services.nexus : Register that nexus service was started] ********** 2025-02-10 08:56:05.429078 | orchestrator | Monday 10 February 2025 08:56:05 +0000 (0:00:01.494) 0:00:08.670 ******* 2025-02-10 08:56:05.525451 | orchestrator | ok: [testbed-manager] 2025-02-10 08:56:05.526564 | orchestrator | 2025-02-10 08:56:05.526608 | orchestrator | TASK [osism.services.nexus : Flush handlers] *********************************** 2025-02-10 08:56:05.526957 | orchestrator | Monday 10 February 2025 08:56:05 +0000 (0:00:00.074) 0:00:08.744 ******* 2025-02-10 08:56:05.527644 | orchestrator | 2025-02-10 08:56:05.528257 | orchestrator | RUNNING HANDLER [osism.services.nexus : Restart nexus service] ***************** 2025-02-10 08:56:05.528975 | orchestrator | Monday 10 February 2025 08:56:05 +0000 (0:00:00.026) 0:00:08.771 ******* 2025-02-10 08:56:05.604854 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:57:05.698679 | orchestrator | 2025-02-10 08:57:05.698864 | orchestrator | RUNNING HANDLER [osism.services.nexus : Wait for nexus service to start] ******* 2025-02-10 08:57:05.698896 | orchestrator | Monday 10 February 2025 08:56:05 +0000 (0:00:00.080) 0:00:08.852 ******* 2025-02-10 08:57:05.698930 | orchestrator | Pausing for 60 seconds 2025-02-10 08:57:05.699588 | orchestrator | changed: [testbed-manager] 2025-02-10 08:57:05.700782 | orchestrator | 2025-02-10 08:57:05.700811 | orchestrator | RUNNING HANDLER [osism.services.nexus : Ensure that all containers are up] ***** 2025-02-10 08:57:05.700829 | orchestrator | Monday 10 February 2025 08:57:05 +0000 (0:01:00.087) 0:01:08.939 ******* 2025-02-10 08:57:06.362350 | orchestrator | changed: [testbed-manager] 2025-02-10 08:57:06.365348 | orchestrator | 2025-02-10 08:57:06.365406 | orchestrator | RUNNING HANDLER [osism.services.nexus : Wait for an healthy nexus service] ***** 2025-02-10 08:57:06.365655 | orchestrator | Monday 10 February 2025 08:57:06 +0000 (0:00:00.669) 0:01:09.609 ******* 2025-02-10 08:57:27.374333 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy nexus service (50 retries left). 
2025-02-10 08:57:27.375619 | orchestrator | changed: [testbed-manager] 2025-02-10 08:57:27.375676 | orchestrator | 2025-02-10 08:57:27.375710 | orchestrator | TASK [osism.services.nexus : Include initialize tasks] ************************* 2025-02-10 08:57:27.376428 | orchestrator | Monday 10 February 2025 08:57:27 +0000 (0:00:21.009) 0:01:30.619 ******* 2025-02-10 08:57:27.456630 | orchestrator | [WARNING]: Found variable using reserved name: args 2025-02-10 08:57:27.496657 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/initialize.yml for testbed-manager 2025-02-10 08:57:27.498748 | orchestrator | 2025-02-10 08:57:27.498871 | orchestrator | TASK [osism.services.nexus : Get setup admin password] ************************* 2025-02-10 08:57:28.614853 | orchestrator | Monday 10 February 2025 08:57:27 +0000 (0:00:00.124) 0:01:30.743 ******* 2025-02-10 08:57:28.615058 | orchestrator | changed: [testbed-manager] 2025-02-10 08:57:28.615248 | orchestrator | 2025-02-10 08:57:28.615276 | orchestrator | TASK [osism.services.nexus : Set setup admin password] ************************* 2025-02-10 08:57:28.615296 | orchestrator | Monday 10 February 2025 08:57:28 +0000 (0:00:01.115) 0:01:31.859 ******* 2025-02-10 08:57:28.680227 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:28.681385 | orchestrator | 2025-02-10 08:57:28.682149 | orchestrator | TASK [osism.services.nexus : Provision scripts included in the container image] *** 2025-02-10 08:57:28.683241 | orchestrator | Monday 10 February 2025 08:57:28 +0000 (0:00:00.068) 0:01:31.927 ******* 2025-02-10 08:57:32.297952 | orchestrator | changed: [testbed-manager] => (item=anonymous.json) 2025-02-10 08:57:32.298627 | orchestrator | changed: [testbed-manager] => (item=cleanup.json) 2025-02-10 08:57:32.298663 | orchestrator | 2025-02-10 08:57:32.298675 | orchestrator | TASK [osism.services.nexus : Provision scripts included in this ansible role] *** 2025-02-10 08:57:32.298694 | orchestrator | Monday 10 February 2025 08:57:32 +0000 (0:00:03.614) 0:01:35.542 ******* 2025-02-10 08:57:32.560940 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/declare-script.yml for testbed-manager => (item=create_repos_from_list) 2025-02-10 08:57:32.562822 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/declare-script.yml for testbed-manager => (item=setup_http_proxy) 2025-02-10 08:57:32.563235 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/declare-script.yml for testbed-manager => (item=setup_realms) 2025-02-10 08:57:32.563770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/declare-script.yml for testbed-manager => (item=update_admin_password) 2025-02-10 08:57:32.564651 | orchestrator | 2025-02-10 08:57:32.565695 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:57:32.567068 | orchestrator | Monday 10 February 2025 08:57:32 +0000 (0:00:00.264) 0:01:35.806 ******* 2025-02-10 08:57:32.653239 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:32.654244 | orchestrator | 2025-02-10 08:57:32.654879 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:57:32.655928 | orchestrator | Monday 10 February 2025 08:57:32 +0000 (0:00:00.092) 0:01:35.899 
******* 2025-02-10 08:57:32.725222 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:57:32.725556 | orchestrator | 2025-02-10 08:57:32.726089 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-10 08:57:32.727082 | orchestrator | Monday 10 February 2025 08:57:32 +0000 (0:00:00.072) 0:01:35.972 ******* 2025-02-10 08:57:33.667745 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:33.668814 | orchestrator | 2025-02-10 08:57:33.668865 | orchestrator | TASK [osism.services.nexus : Deleting script create_repos_from_list] *********** 2025-02-10 08:57:34.355130 | orchestrator | Monday 10 February 2025 08:57:33 +0000 (0:00:00.939) 0:01:36.911 ******* 2025-02-10 08:57:34.355361 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:34.355743 | orchestrator | 2025-02-10 08:57:34.355773 | orchestrator | TASK [osism.services.nexus : Declaring script create_repos_from_list] ********** 2025-02-10 08:57:34.355797 | orchestrator | Monday 10 February 2025 08:57:34 +0000 (0:00:00.687) 0:01:37.598 ******* 2025-02-10 08:57:35.026507 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:35.027177 | orchestrator | 2025-02-10 08:57:35.027250 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:57:35.027286 | orchestrator | Monday 10 February 2025 08:57:35 +0000 (0:00:00.670) 0:01:38.269 ******* 2025-02-10 08:57:35.121984 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:35.122545 | orchestrator | 2025-02-10 08:57:35.123475 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:57:35.123530 | orchestrator | Monday 10 February 2025 08:57:35 +0000 (0:00:00.097) 0:01:38.366 ******* 2025-02-10 08:57:35.185669 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:57:35.188844 | orchestrator | 2025-02-10 08:57:35.869768 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-10 08:57:35.870075 | orchestrator | Monday 10 February 2025 08:57:35 +0000 (0:00:00.066) 0:01:38.433 ******* 2025-02-10 08:57:35.870128 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:35.870977 | orchestrator | 2025-02-10 08:57:35.871020 | orchestrator | TASK [osism.services.nexus : Deleting script setup_http_proxy] ***************** 2025-02-10 08:57:35.871333 | orchestrator | Monday 10 February 2025 08:57:35 +0000 (0:00:00.682) 0:01:39.116 ******* 2025-02-10 08:57:36.572084 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:36.574413 | orchestrator | 2025-02-10 08:57:36.574541 | orchestrator | TASK [osism.services.nexus : Declaring script setup_http_proxy] **************** 2025-02-10 08:57:37.232582 | orchestrator | Monday 10 February 2025 08:57:36 +0000 (0:00:00.702) 0:01:39.818 ******* 2025-02-10 08:57:37.232830 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:37.235121 | orchestrator | 2025-02-10 08:57:37.235188 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:57:37.235408 | orchestrator | Monday 10 February 2025 08:57:37 +0000 (0:00:00.657) 0:01:40.475 ******* 2025-02-10 08:57:37.300574 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:37.300795 | orchestrator | 2025-02-10 08:57:37.303309 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:57:37.363014 | orchestrator | Monday 10 February 2025 08:57:37 +0000 (0:00:00.072) 0:01:40.547 ******* 2025-02-10 
08:57:37.363203 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:57:37.365578 | orchestrator | 2025-02-10 08:57:37.366117 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-10 08:57:37.366941 | orchestrator | Monday 10 February 2025 08:57:37 +0000 (0:00:00.063) 0:01:40.611 ******* 2025-02-10 08:57:37.983351 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:37.988042 | orchestrator | 2025-02-10 08:57:37.988084 | orchestrator | TASK [osism.services.nexus : Deleting script setup_realms] ********************* 2025-02-10 08:57:37.988144 | orchestrator | Monday 10 February 2025 08:57:37 +0000 (0:00:00.617) 0:01:41.228 ******* 2025-02-10 08:57:38.659920 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:38.661445 | orchestrator | 2025-02-10 08:57:38.662807 | orchestrator | TASK [osism.services.nexus : Declaring script setup_realms] ******************** 2025-02-10 08:57:38.662855 | orchestrator | Monday 10 February 2025 08:57:38 +0000 (0:00:00.677) 0:01:41.906 ******* 2025-02-10 08:57:39.349310 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:39.349855 | orchestrator | 2025-02-10 08:57:39.350164 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:57:39.350771 | orchestrator | Monday 10 February 2025 08:57:39 +0000 (0:00:00.687) 0:01:42.593 ******* 2025-02-10 08:57:39.434411 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:39.435079 | orchestrator | 2025-02-10 08:57:39.435494 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:57:39.435970 | orchestrator | Monday 10 February 2025 08:57:39 +0000 (0:00:00.088) 0:01:42.682 ******* 2025-02-10 08:57:39.513298 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:57:39.513782 | orchestrator | 2025-02-10 08:57:39.514218 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-10 08:57:39.515207 | orchestrator | Monday 10 February 2025 08:57:39 +0000 (0:00:00.077) 0:01:42.759 ******* 2025-02-10 08:57:40.175653 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:40.176265 | orchestrator | 2025-02-10 08:57:40.176756 | orchestrator | TASK [osism.services.nexus : Deleting script update_admin_password] ************ 2025-02-10 08:57:40.177712 | orchestrator | Monday 10 February 2025 08:57:40 +0000 (0:00:00.661) 0:01:43.421 ******* 2025-02-10 08:57:40.825252 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:40.827611 | orchestrator | 2025-02-10 08:57:40.827663 | orchestrator | TASK [osism.services.nexus : Declaring script update_admin_password] *********** 2025-02-10 08:57:40.828951 | orchestrator | Monday 10 February 2025 08:57:40 +0000 (0:00:00.651) 0:01:44.072 ******* 2025-02-10 08:57:41.546857 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:41.547644 | orchestrator | 2025-02-10 08:57:41.547868 | orchestrator | TASK [osism.services.nexus : Set admin password] ******************************* 2025-02-10 08:57:41.547893 | orchestrator | Monday 10 February 2025 08:57:41 +0000 (0:00:00.719) 0:01:44.792 ******* 2025-02-10 08:57:41.648164 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/call-script.yml for testbed-manager 2025-02-10 08:57:41.648369 | orchestrator | 2025-02-10 08:57:41.649431 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:57:41.649528 | orchestrator 
| Monday 10 February 2025 08:57:41 +0000 (0:00:00.100) 0:01:44.892 ******* 2025-02-10 08:57:41.723483 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:41.723878 | orchestrator | 2025-02-10 08:57:41.723906 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:57:41.724139 | orchestrator | Monday 10 February 2025 08:57:41 +0000 (0:00:00.078) 0:01:44.971 ******* 2025-02-10 08:57:41.793922 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:57:41.795619 | orchestrator | 2025-02-10 08:57:41.795679 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-10 08:57:41.795982 | orchestrator | Monday 10 February 2025 08:57:41 +0000 (0:00:00.067) 0:01:45.038 ******* 2025-02-10 08:57:42.448648 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:42.449727 | orchestrator | 2025-02-10 08:57:42.450370 | orchestrator | TASK [osism.services.nexus : Calling script update_admin_password] ************* 2025-02-10 08:57:42.451153 | orchestrator | Monday 10 February 2025 08:57:42 +0000 (0:00:00.655) 0:01:45.694 ******* 2025-02-10 08:57:44.460084 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:44.521556 | orchestrator | 2025-02-10 08:57:44.521712 | orchestrator | TASK [osism.services.nexus : Set new admin password] *************************** 2025-02-10 08:57:44.521737 | orchestrator | Monday 10 February 2025 08:57:44 +0000 (0:00:02.011) 0:01:47.705 ******* 2025-02-10 08:57:44.521768 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:44.521819 | orchestrator | 2025-02-10 08:57:44.521857 | orchestrator | TASK [osism.services.nexus : Allow anonymous access] *************************** 2025-02-10 08:57:44.521870 | orchestrator | Monday 10 February 2025 08:57:44 +0000 (0:00:00.063) 0:01:47.769 ******* 2025-02-10 08:57:46.477591 | orchestrator | changed: [testbed-manager] 2025-02-10 08:57:46.479011 | orchestrator | 2025-02-10 08:57:46.479080 | orchestrator | TASK [osism.services.nexus : Cleanup default repositories] ********************* 2025-02-10 08:57:46.479120 | orchestrator | Monday 10 February 2025 08:57:46 +0000 (0:00:01.953) 0:01:49.723 ******* 2025-02-10 08:57:48.399610 | orchestrator | changed: [testbed-manager] 2025-02-10 08:57:48.399888 | orchestrator | 2025-02-10 08:57:48.399923 | orchestrator | TASK [osism.services.nexus : Setup http proxy] ********************************* 2025-02-10 08:57:48.400546 | orchestrator | Monday 10 February 2025 08:57:48 +0000 (0:00:01.919) 0:01:51.643 ******* 2025-02-10 08:57:48.503439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/call-script.yml for testbed-manager 2025-02-10 08:57:48.503774 | orchestrator | 2025-02-10 08:57:48.504168 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:57:48.504869 | orchestrator | Monday 10 February 2025 08:57:48 +0000 (0:00:00.108) 0:01:51.751 ******* 2025-02-10 08:57:48.596680 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:48.599523 | orchestrator | 2025-02-10 08:57:48.599577 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:57:48.599604 | orchestrator | Monday 10 February 2025 08:57:48 +0000 (0:00:00.091) 0:01:51.843 ******* 2025-02-10 08:57:48.668914 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:57:48.669602 | orchestrator | 2025-02-10 08:57:48.673291 | orchestrator | TASK [osism.services.nexus 
: Wait for nexus] *********************************** 2025-02-10 08:57:48.673664 | orchestrator | Monday 10 February 2025 08:57:48 +0000 (0:00:00.072) 0:01:51.916 ******* 2025-02-10 08:57:49.345591 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:49.345855 | orchestrator | 2025-02-10 08:57:49.346642 | orchestrator | TASK [osism.services.nexus : Calling script setup_http_proxy] ****************** 2025-02-10 08:57:49.347127 | orchestrator | Monday 10 February 2025 08:57:49 +0000 (0:00:00.675) 0:01:52.591 ******* 2025-02-10 08:57:50.421510 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:50.423832 | orchestrator | 2025-02-10 08:57:50.649351 | orchestrator | TASK [osism.services.nexus : Setup realms] ************************************* 2025-02-10 08:57:50.649541 | orchestrator | Monday 10 February 2025 08:57:50 +0000 (0:00:01.075) 0:01:53.667 ******* 2025-02-10 08:57:50.649579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/call-script.yml for testbed-manager 2025-02-10 08:57:50.652316 | orchestrator | 2025-02-10 08:57:50.653522 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:57:50.654160 | orchestrator | Monday 10 February 2025 08:57:50 +0000 (0:00:00.224) 0:01:53.892 ******* 2025-02-10 08:57:50.727774 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:50.790214 | orchestrator | 2025-02-10 08:57:50.790359 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:57:50.790379 | orchestrator | Monday 10 February 2025 08:57:50 +0000 (0:00:00.081) 0:01:53.973 ******* 2025-02-10 08:57:50.790412 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:57:50.791982 | orchestrator | 2025-02-10 08:57:50.792066 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-10 08:57:51.384781 | orchestrator | Monday 10 February 2025 08:57:50 +0000 (0:00:00.062) 0:01:54.036 ******* 2025-02-10 08:57:51.384917 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:51.386150 | orchestrator | 2025-02-10 08:57:51.387147 | orchestrator | TASK [osism.services.nexus : Calling script setup_realms] ********************** 2025-02-10 08:57:51.387866 | orchestrator | Monday 10 February 2025 08:57:51 +0000 (0:00:00.595) 0:01:54.631 ******* 2025-02-10 08:57:52.451623 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:52.452784 | orchestrator | 2025-02-10 08:57:52.452840 | orchestrator | TASK [osism.services.nexus : Apply defaults to docker proxy repos] ************* 2025-02-10 08:57:52.459244 | orchestrator | Monday 10 February 2025 08:57:52 +0000 (0:00:01.063) 0:01:55.695 ******* 2025-02-10 08:57:52.522361 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:52.522950 | orchestrator | 2025-02-10 08:57:52.524851 | orchestrator | TASK [osism.services.nexus : Add docker repositories to global repos list] ***** 2025-02-10 08:57:52.612340 | orchestrator | Monday 10 February 2025 08:57:52 +0000 (0:00:00.072) 0:01:55.767 ******* 2025-02-10 08:57:52.612534 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:52.612726 | orchestrator | 2025-02-10 08:57:52.613535 | orchestrator | TASK [osism.services.nexus : Apply defaults to apt proxy repos] **************** 2025-02-10 08:57:52.613966 | orchestrator | Monday 10 February 2025 08:57:52 +0000 (0:00:00.090) 0:01:55.858 ******* 2025-02-10 08:57:52.681117 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:52.682641 | 
orchestrator | 2025-02-10 08:57:52.684627 | orchestrator | TASK [osism.services.nexus : Add apt repositories to global repos list] ******** 2025-02-10 08:57:52.778185 | orchestrator | Monday 10 February 2025 08:57:52 +0000 (0:00:00.069) 0:01:55.928 ******* 2025-02-10 08:57:52.778349 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:52.778814 | orchestrator | 2025-02-10 08:57:52.778852 | orchestrator | TASK [osism.services.nexus : Create configured repositories] ******************* 2025-02-10 08:57:52.779034 | orchestrator | Monday 10 February 2025 08:57:52 +0000 (0:00:00.094) 0:01:56.022 ******* 2025-02-10 08:57:52.882440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/call-script.yml for testbed-manager 2025-02-10 08:57:52.883811 | orchestrator | 2025-02-10 08:57:52.885399 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:57:52.886500 | orchestrator | Monday 10 February 2025 08:57:52 +0000 (0:00:00.105) 0:01:56.128 ******* 2025-02-10 08:57:52.966610 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:52.967443 | orchestrator | 2025-02-10 08:57:52.967511 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-10 08:57:52.967941 | orchestrator | Monday 10 February 2025 08:57:52 +0000 (0:00:00.083) 0:01:56.212 ******* 2025-02-10 08:57:53.050011 | orchestrator | skipping: [testbed-manager] 2025-02-10 08:57:53.050959 | orchestrator | 2025-02-10 08:57:53.051967 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-10 08:57:53.052832 | orchestrator | Monday 10 February 2025 08:57:53 +0000 (0:00:00.084) 0:01:56.296 ******* 2025-02-10 08:57:53.729841 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:53.730980 | orchestrator | 2025-02-10 08:57:53.731648 | orchestrator | TASK [osism.services.nexus : Calling script create_repos_from_list] ************ 2025-02-10 08:57:53.732305 | orchestrator | Monday 10 February 2025 08:57:53 +0000 (0:00:00.678) 0:01:56.975 ******* 2025-02-10 08:57:56.540621 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:56.666928 | orchestrator | 2025-02-10 08:57:56.667064 | orchestrator | TASK [Set osism.nexus.status fact] ********************************************* 2025-02-10 08:57:56.667109 | orchestrator | Monday 10 February 2025 08:57:56 +0000 (0:00:02.811) 0:01:59.787 ******* 2025-02-10 08:57:56.667143 | orchestrator | included: osism.commons.state for testbed-manager 2025-02-10 08:57:56.667349 | orchestrator | 2025-02-10 08:57:56.672331 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-02-10 08:57:56.672376 | orchestrator | Monday 10 February 2025 08:57:56 +0000 (0:00:00.126) 0:01:59.914 ******* 2025-02-10 08:57:57.052998 | orchestrator | ok: [testbed-manager] 2025-02-10 08:57:57.053648 | orchestrator | 2025-02-10 08:57:57.053931 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-02-10 08:57:57.054818 | orchestrator | Monday 10 February 2025 08:57:57 +0000 (0:00:00.384) 0:02:00.299 ******* 2025-02-10 08:57:57.663407 | orchestrator | changed: [testbed-manager] 2025-02-10 08:57:57.663684 | orchestrator | 2025-02-10 08:57:57.664274 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:57:57.664568 | orchestrator | 2025-02-10 08:57:57 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-02-10 08:57:57.664671 | orchestrator | 2025-02-10 08:57:57 | INFO  | Please wait and do not abort execution. 2025-02-10 08:57:57.665408 | orchestrator | testbed-manager : ok=64  changed=14  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-02-10 08:57:57.666148 | orchestrator | 2025-02-10 08:57:57.667019 | orchestrator | 2025-02-10 08:57:57.667759 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 08:57:57.668879 | orchestrator | Monday 10 February 2025 08:57:57 +0000 (0:00:00.609) 0:02:00.908 ******* 2025-02-10 08:57:57.670230 | orchestrator | =============================================================================== 2025-02-10 08:57:57.672042 | orchestrator | osism.services.nexus : Wait for nexus service to start ----------------- 60.09s 2025-02-10 08:57:57.672542 | orchestrator | osism.services.nexus : Wait for an healthy nexus service --------------- 21.01s 2025-02-10 08:57:57.672891 | orchestrator | osism.services.nexus : Provision scripts included in the container image --- 3.61s 2025-02-10 08:57:57.673410 | orchestrator | osism.services.nexus : Calling script create_repos_from_list ------------ 2.81s 2025-02-10 08:57:57.674282 | orchestrator | osism.services.nexus : Calling script update_admin_password ------------- 2.01s 2025-02-10 08:57:57.674664 | orchestrator | osism.services.nexus : Allow anonymous access --------------------------- 1.95s 2025-02-10 08:57:57.675082 | orchestrator | osism.services.nexus : Cleanup default repositories --------------------- 1.92s 2025-02-10 08:57:57.675392 | orchestrator | osism.services.nexus : Copy configuration files ------------------------- 1.91s 2025-02-10 08:57:57.675698 | orchestrator | osism.services.nexus : Manage nexus service ----------------------------- 1.49s 2025-02-10 08:57:57.676555 | orchestrator | osism.services.nexus : Get setup admin password ------------------------- 1.12s 2025-02-10 08:57:57.676745 | orchestrator | osism.services.nexus : Calling script setup_http_proxy ------------------ 1.08s 2025-02-10 08:57:57.676997 | orchestrator | osism.services.nexus : Calling script setup_realms ---------------------- 1.06s 2025-02-10 08:57:57.677501 | orchestrator | osism.services.nexus : Copy docker-compose.yml file --------------------- 1.01s 2025-02-10 08:57:57.677610 | orchestrator | osism.services.nexus : Stop and disable old service docker-compose@nexus --- 0.98s 2025-02-10 08:57:57.678132 | orchestrator | osism.services.nexus : Wait for nexus ----------------------------------- 0.94s 2025-02-10 08:57:57.679269 | orchestrator | osism.services.nexus : Copy nexus systemd unit file --------------------- 0.87s 2025-02-10 08:57:57.679517 | orchestrator | osism.services.nexus : Create required directories ---------------------- 0.85s 2025-02-10 08:57:57.680341 | orchestrator | osism.services.nexus : Create traefik external network ------------------ 0.83s 2025-02-10 08:57:57.680608 | orchestrator | osism.services.nexus : Declaring script update_admin_password ----------- 0.72s 2025-02-10 08:57:57.681222 | orchestrator | osism.services.nexus : Deleting script setup_http_proxy ----------------- 0.70s 2025-02-10 08:57:58.096164 | orchestrator | + [[ true == \t\r\u\e ]] 2025-02-10 08:57:58.101873 | orchestrator | + sh -c '/opt/configuration/scripts/set-docker-registry.sh nexus.testbed.osism.xyz:8193' 2025-02-10 08:57:58.101945 | orchestrator | + set -e 2025-02-10 08:57:58.107859 | orchestrator | + 
source /opt/manager-vars.sh 2025-02-10 08:57:58.107913 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-10 08:57:58.107928 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-10 08:57:58.107943 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-10 08:57:58.107957 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-10 08:57:58.107971 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-10 08:57:58.107987 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-10 08:57:58.108001 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-10 08:57:58.108016 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-10 08:57:58.108030 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-10 08:57:58.108043 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-10 08:57:58.108057 | orchestrator | ++ export ARA=false 2025-02-10 08:57:58.108071 | orchestrator | ++ ARA=false 2025-02-10 08:57:58.108103 | orchestrator | ++ export TEMPEST=false 2025-02-10 08:57:58.108117 | orchestrator | ++ TEMPEST=false 2025-02-10 08:57:58.108131 | orchestrator | ++ export IS_ZUUL=true 2025-02-10 08:57:58.108145 | orchestrator | ++ IS_ZUUL=true 2025-02-10 08:57:58.108159 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.97 2025-02-10 08:57:58.108173 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.97 2025-02-10 08:57:58.108222 | orchestrator | ++ export EXTERNAL_API=false 2025-02-10 08:57:58.108236 | orchestrator | ++ EXTERNAL_API=false 2025-02-10 08:57:58.108250 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-10 08:57:58.108264 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-10 08:57:58.108278 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-10 08:57:58.108292 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-10 08:57:58.108306 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-02-10 08:57:58.108319 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-10 08:57:58.108333 | orchestrator | + DOCKER_REGISTRY=nexus.testbed.osism.xyz:8193 2025-02-10 08:57:58.108347 | orchestrator | + sed -i 's#ceph_docker_registry: .*#ceph_docker_registry: nexus.testbed.osism.xyz:8193#g' /opt/configuration/inventory/group_vars/all/registries.yml 2025-02-10 08:57:58.108394 | orchestrator | + sed -i 's#docker_registry_ansible: .*#docker_registry_ansible: nexus.testbed.osism.xyz:8193#g' /opt/configuration/inventory/group_vars/all/registries.yml 2025-02-10 08:57:58.112487 | orchestrator | + sed -i 's#docker_registry_kolla: .*#docker_registry_kolla: nexus.testbed.osism.xyz:8193#g' /opt/configuration/inventory/group_vars/all/registries.yml 2025-02-10 08:57:58.117621 | orchestrator | + sed -i 's#docker_registry_netbox: .*#docker_registry_netbox: nexus.testbed.osism.xyz:8193#g' /opt/configuration/inventory/group_vars/all/registries.yml 2025-02-10 08:57:58.121872 | orchestrator | + [[ nexus.testbed.osism.xyz:8193 == \o\s\i\s\m\.\h\a\r\b\o\r\.\r\e\g\i\o\.\d\i\g\i\t\a\l ]] 2025-02-10 08:57:58.122149 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-10 08:57:58.127727 | orchestrator | + sed -i 's/docker_namespace: osism/docker_namespace: kolla/' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-02-10 08:57:58.127786 | orchestrator | + osism apply squid 2025-02-10 08:57:59.688847 | orchestrator | 2025-02-10 08:57:59 | INFO  | Task c8c73304-441a-450e-b43b-9e37c084fc44 (squid) was prepared for execution. 2025-02-10 08:58:02.849722 | orchestrator | 2025-02-10 08:57:59 | INFO  | It takes a moment until task c8c73304-441a-450e-b43b-9e37c084fc44 (squid) has been started and output is visible here. 
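The registry override traced above amounts to a few sed substitutions against the testbed configuration before squid is applied. A minimal sketch of an equivalent script follows; the actual layout of /opt/configuration/scripts/set-docker-registry.sh is an assumption, and only the substitutions visible in the trace are reproduced.

#!/usr/bin/env bash
# Sketch only: mirrors the sed calls traced above, not the real script.
set -e
source /opt/manager-vars.sh

DOCKER_REGISTRY="$1"   # e.g. nexus.testbed.osism.xyz:8193
REGISTRIES=/opt/configuration/inventory/group_vars/all/registries.yml

# Point every registry variable at the local Nexus proxy.
for key in ceph_docker_registry docker_registry_ansible docker_registry_kolla docker_registry_netbox; do
    sed -i "s#${key}: .*#${key}: ${DOCKER_REGISTRY}#g" "$REGISTRIES"
done

# With MANAGER_VERSION=latest, Kolla images are pulled from the kolla namespace.
if [[ "${MANAGER_VERSION}" == "latest" ]]; then
    sed -i 's/docker_namespace: osism/docker_namespace: kolla/' \
        /opt/configuration/inventory/group_vars/all/kolla.yml
fi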
2025-02-10 08:58:02.849875 | orchestrator | 2025-02-10 08:58:02.852708 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-02-10 08:58:02.853956 | orchestrator | 2025-02-10 08:58:02.854573 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-02-10 08:58:02.954335 | orchestrator | Monday 10 February 2025 08:58:02 +0000 (0:00:00.121) 0:00:00.121 ******* 2025-02-10 08:58:02.954562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-02-10 08:58:02.955177 | orchestrator | 2025-02-10 08:58:02.955213 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-02-10 08:58:02.955238 | orchestrator | Monday 10 February 2025 08:58:02 +0000 (0:00:00.100) 0:00:00.222 ******* 2025-02-10 08:58:04.378790 | orchestrator | ok: [testbed-manager] 2025-02-10 08:58:04.379907 | orchestrator | 2025-02-10 08:58:04.380081 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-02-10 08:58:04.380380 | orchestrator | Monday 10 February 2025 08:58:04 +0000 (0:00:01.428) 0:00:01.650 ******* 2025-02-10 08:58:05.601252 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-02-10 08:58:05.601641 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-02-10 08:58:05.601672 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-02-10 08:58:05.601687 | orchestrator | 2025-02-10 08:58:05.601707 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-02-10 08:58:06.715858 | orchestrator | Monday 10 February 2025 08:58:05 +0000 (0:00:01.219) 0:00:02.869 ******* 2025-02-10 08:58:06.717060 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-02-10 08:58:07.084242 | orchestrator | 2025-02-10 08:58:07.084401 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-02-10 08:58:07.084423 | orchestrator | Monday 10 February 2025 08:58:06 +0000 (0:00:01.108) 0:00:03.978 ******* 2025-02-10 08:58:07.084488 | orchestrator | ok: [testbed-manager] 2025-02-10 08:58:07.085173 | orchestrator | 2025-02-10 08:58:07.085214 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-02-10 08:58:07.085775 | orchestrator | Monday 10 February 2025 08:58:07 +0000 (0:00:00.380) 0:00:04.359 ******* 2025-02-10 08:58:08.094317 | orchestrator | changed: [testbed-manager] 2025-02-10 08:58:08.098183 | orchestrator | 2025-02-10 08:58:38.003274 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-02-10 08:58:38.003527 | orchestrator | Monday 10 February 2025 08:58:08 +0000 (0:00:01.009) 0:00:05.368 ******* 2025-02-10 08:58:38.003575 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-02-10 08:58:50.369790 | orchestrator | ok: [testbed-manager] 2025-02-10 08:58:50.369959 | orchestrator | 2025-02-10 08:58:50.369977 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-02-10 08:58:50.369989 | orchestrator | Monday 10 February 2025 08:58:37 +0000 (0:00:29.905) 0:00:35.274 ******* 2025-02-10 08:58:50.370063 | orchestrator | changed: [testbed-manager] 2025-02-10 08:58:50.370402 | orchestrator | 2025-02-10 08:58:50.372207 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-02-10 08:58:50.376872 | orchestrator | Monday 10 February 2025 08:58:50 +0000 (0:00:12.366) 0:00:47.641 ******* 2025-02-10 08:59:50.467225 | orchestrator | Pausing for 60 seconds 2025-02-10 08:59:50.468424 | orchestrator | changed: [testbed-manager] 2025-02-10 08:59:50.468536 | orchestrator | 2025-02-10 08:59:50.468587 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-02-10 08:59:50.468690 | orchestrator | Monday 10 February 2025 08:59:50 +0000 (0:01:00.097) 0:01:47.739 ******* 2025-02-10 08:59:50.533657 | orchestrator | ok: [testbed-manager] 2025-02-10 08:59:50.533929 | orchestrator | 2025-02-10 08:59:50.533982 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-02-10 08:59:50.534950 | orchestrator | Monday 10 February 2025 08:59:50 +0000 (0:00:00.068) 0:01:47.807 ******* 2025-02-10 08:59:51.190237 | orchestrator | changed: [testbed-manager] 2025-02-10 08:59:51.190650 | orchestrator | 2025-02-10 08:59:51.190704 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 08:59:51.190857 | orchestrator | 2025-02-10 08:59:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 08:59:51.190880 | orchestrator | 2025-02-10 08:59:51 | INFO  | Please wait and do not abort execution. 
2025-02-10 08:59:51.190901 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 08:59:51.192081 | orchestrator | 2025-02-10 08:59:51.192598 | orchestrator | 2025-02-10 08:59:51.193109 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 08:59:51.193368 | orchestrator | Monday 10 February 2025 08:59:51 +0000 (0:00:00.658) 0:01:48.466 ******* 2025-02-10 08:59:51.193852 | orchestrator | =============================================================================== 2025-02-10 08:59:51.194223 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.10s 2025-02-10 08:59:51.194542 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 29.91s 2025-02-10 08:59:51.194762 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.37s 2025-02-10 08:59:51.195046 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.43s 2025-02-10 08:59:51.195297 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.22s 2025-02-10 08:59:51.195411 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.11s 2025-02-10 08:59:51.195730 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.01s 2025-02-10 08:59:51.195962 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s 2025-02-10 08:59:51.196218 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2025-02-10 08:59:51.197909 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-02-10 08:59:51.629122 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-02-10 08:59:51.629277 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-02-10 08:59:53.068490 | orchestrator | 2025-02-10 08:59:53 | INFO  | Task 5bcb3b96-2fe9-45df-b68c-c063bc65bb3d (operator) was prepared for execution. 2025-02-10 08:59:56.248225 | orchestrator | 2025-02-10 08:59:53 | INFO  | It takes a moment until task 5bcb3b96-2fe9-45df-b68c-c063bc65bb3d (operator) has been started and output is visible here. 
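After the squid play the proxy runs as a compose-managed service on the manager. A quick manual check could look like the sketch below; the compose file path under /opt/squid and the container name squid are assumptions based on the directories and copy tasks shown above.

# Sketch only: verify that the squid proxy container is up and reports healthy.
docker compose -f /opt/squid/docker-compose.yml ps
docker inspect --format '{{.State.Health.Status}}' squid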
2025-02-10 08:59:56.248415 | orchestrator | 2025-02-10 08:59:56.249617 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-02-10 08:59:56.249672 | orchestrator | 2025-02-10 08:59:56.249698 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 08:59:56.249762 | orchestrator | Monday 10 February 2025 08:59:56 +0000 (0:00:00.089) 0:00:00.089 ******* 2025-02-10 08:59:59.922147 | orchestrator | ok: [testbed-node-0] 2025-02-10 08:59:59.923809 | orchestrator | ok: [testbed-node-1] 2025-02-10 08:59:59.923859 | orchestrator | ok: [testbed-node-4] 2025-02-10 08:59:59.923874 | orchestrator | ok: [testbed-node-5] 2025-02-10 08:59:59.923899 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:00:00.742912 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:00:00.743060 | orchestrator | 2025-02-10 09:00:00.743082 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-02-10 09:00:00.743098 | orchestrator | Monday 10 February 2025 08:59:59 +0000 (0:00:03.677) 0:00:03.766 ******* 2025-02-10 09:00:00.743132 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:00:00.743866 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:00:00.749332 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:00:00.749528 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:00:00.749551 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:00:00.749564 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:00:00.749577 | orchestrator | 2025-02-10 09:00:00.749591 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-02-10 09:00:00.749604 | orchestrator | 2025-02-10 09:00:00.749622 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-02-10 09:00:00.749673 | orchestrator | Monday 10 February 2025 09:00:00 +0000 (0:00:00.823) 0:00:04.589 ******* 2025-02-10 09:00:00.834712 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:00:00.862787 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:00:00.881037 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:00:00.933014 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:00:00.933760 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:00:00.933825 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:00:00.934172 | orchestrator | 2025-02-10 09:00:00.934319 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-02-10 09:00:00.935971 | orchestrator | Monday 10 February 2025 09:00:00 +0000 (0:00:00.190) 0:00:04.780 ******* 2025-02-10 09:00:01.028980 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:00:01.055165 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:00:01.098658 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:00:01.100170 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:00:01.100382 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:00:01.101862 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:00:01.102329 | orchestrator | 2025-02-10 09:00:01.102814 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-02-10 09:00:01.103547 | orchestrator | Monday 10 February 2025 09:00:01 +0000 (0:00:00.166) 0:00:04.946 ******* 2025-02-10 09:00:01.870539 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:00:01.870949 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:00:01.870990 | orchestrator | changed: [testbed-node-5] 2025-02-10 
09:00:01.871013 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:00:01.871678 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:00:01.872262 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:00:01.872698 | orchestrator | 2025-02-10 09:00:01.873167 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-02-10 09:00:01.873809 | orchestrator | Monday 10 February 2025 09:00:01 +0000 (0:00:00.766) 0:00:05.713 ******* 2025-02-10 09:00:02.639544 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:00:02.640137 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:00:02.641120 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:00:02.642064 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:00:02.642849 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:00:02.643851 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:00:02.644579 | orchestrator | 2025-02-10 09:00:02.645945 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-02-10 09:00:02.646295 | orchestrator | Monday 10 February 2025 09:00:02 +0000 (0:00:00.773) 0:00:06.486 ******* 2025-02-10 09:00:03.816995 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-02-10 09:00:03.817988 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-02-10 09:00:03.818084 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-02-10 09:00:03.819562 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-02-10 09:00:03.820026 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-02-10 09:00:03.820048 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-02-10 09:00:03.822121 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-02-10 09:00:03.822493 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-02-10 09:00:03.824716 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-02-10 09:00:03.825084 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-02-10 09:00:03.825118 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-02-10 09:00:03.825469 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-02-10 09:00:03.826274 | orchestrator | 2025-02-10 09:00:03.827460 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-02-10 09:00:05.037108 | orchestrator | Monday 10 February 2025 09:00:03 +0000 (0:00:01.175) 0:00:07.662 ******* 2025-02-10 09:00:05.037325 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:00:05.037614 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:00:05.038526 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:00:05.038562 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:00:05.039985 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:00:05.041500 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:00:05.042402 | orchestrator | 2025-02-10 09:00:05.043530 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-02-10 09:00:05.044218 | orchestrator | Monday 10 February 2025 09:00:05 +0000 (0:00:01.218) 0:00:08.881 ******* 2025-02-10 09:00:06.195352 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-02-10 09:00:06.196009 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-02-10 09:00:06.295762 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-02-10 09:00:06.295966 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-02-10 09:00:06.296085 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-02-10 09:00:06.297349 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-02-10 09:00:06.298398 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-02-10 09:00:06.298496 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-02-10 09:00:06.298734 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-02-10 09:00:06.298777 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-02-10 09:00:06.299074 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-02-10 09:00:06.299722 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-02-10 09:00:06.299972 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-02-10 09:00:06.300672 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-02-10 09:00:06.301165 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-02-10 09:00:06.301449 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-02-10 09:00:06.302778 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-02-10 09:00:06.303918 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-02-10 09:00:06.306001 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-02-10 09:00:06.306878 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-02-10 09:00:06.308546 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-02-10 09:00:06.309329 | orchestrator | 2025-02-10 09:00:06.310704 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-02-10 09:00:06.311688 | orchestrator | Monday 10 February 2025 09:00:06 +0000 (0:00:01.261) 0:00:10.142 ******* 2025-02-10 09:00:06.862251 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:00:06.862657 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:00:06.863064 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:00:06.863104 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:00:06.863821 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:00:06.864054 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:00:06.864883 | orchestrator | 2025-02-10 09:00:06.865353 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-02-10 09:00:06.865968 | orchestrator | Monday 10 February 2025 09:00:06 +0000 (0:00:00.565) 0:00:10.708 ******* 2025-02-10 09:00:06.939829 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:00:06.987075 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:00:07.009874 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:00:07.069412 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:00:07.069941 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:00:07.069974 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:00:07.070448 | orchestrator | 2025-02-10 09:00:07.070894 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-02-10 09:00:07.071367 | orchestrator | Monday 10 February 2025 09:00:07 +0000 (0:00:00.206) 0:00:10.914 ******* 2025-02-10 09:00:07.759179 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-02-10 09:00:07.759635 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-02-10 09:00:07.759684 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:00:07.760599 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:00:07.761545 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-10 09:00:07.762185 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-10 09:00:07.763130 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-10 09:00:07.763798 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:00:07.764550 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:00:07.765272 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:00:07.768305 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-10 09:00:07.772241 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:00:07.772308 | orchestrator | 2025-02-10 09:00:07.772352 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-02-10 09:00:07.772893 | orchestrator | Monday 10 February 2025 09:00:07 +0000 (0:00:00.688) 0:00:11.603 ******* 2025-02-10 09:00:07.815850 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:00:07.839166 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:00:07.890335 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:00:07.931961 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:00:07.932109 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:00:07.932839 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:00:07.933120 | orchestrator | 2025-02-10 09:00:07.935941 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-02-10 09:00:07.936576 | orchestrator | Monday 10 February 2025 09:00:07 +0000 (0:00:00.174) 0:00:11.778 ******* 2025-02-10 09:00:07.976997 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:00:08.000914 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:00:08.020149 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:00:08.044723 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:00:08.076511 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:00:08.076699 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:00:08.076719 | orchestrator | 2025-02-10 09:00:08.077134 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-02-10 09:00:08.079424 | orchestrator | Monday 10 February 2025 09:00:08 +0000 (0:00:00.145) 0:00:11.923 ******* 2025-02-10 09:00:08.139975 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:00:08.157658 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:00:08.192989 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:00:08.235722 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:00:08.236101 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:00:08.237463 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:00:08.238142 | orchestrator | 2025-02-10 09:00:08.239078 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-02-10 09:00:08.239927 | orchestrator | Monday 10 February 2025 09:00:08 +0000 (0:00:00.159) 0:00:12.082 ******* 2025-02-10 09:00:08.888678 | orchestrator | changed: [testbed-node-0] 2025-02-10 
09:00:08.889135 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:00:08.889179 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:00:08.889401 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:00:08.890149 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:00:08.890752 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:00:08.891247 | orchestrator | 2025-02-10 09:00:08.892252 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-02-10 09:00:08.892321 | orchestrator | Monday 10 February 2025 09:00:08 +0000 (0:00:00.653) 0:00:12.736 ******* 2025-02-10 09:00:08.976981 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:00:08.999140 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:00:09.125323 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:00:09.125662 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:00:09.126692 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:00:09.127652 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:00:09.128341 | orchestrator | 2025-02-10 09:00:09.128787 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:00:09.129110 | orchestrator | 2025-02-10 09:00:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:00:09.129234 | orchestrator | 2025-02-10 09:00:09 | INFO  | Please wait and do not abort execution. 2025-02-10 09:00:09.129966 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:00:09.130741 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:00:09.131345 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:00:09.132324 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:00:09.133077 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:00:09.133983 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:00:09.134480 | orchestrator | 2025-02-10 09:00:09.134977 | orchestrator | 2025-02-10 09:00:09.135382 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:00:09.135656 | orchestrator | Monday 10 February 2025 09:00:09 +0000 (0:00:00.235) 0:00:12.971 ******* 2025-02-10 09:00:09.135771 | orchestrator | =============================================================================== 2025-02-10 09:00:09.136277 | orchestrator | Gathering Facts --------------------------------------------------------- 3.68s 2025-02-10 09:00:09.136574 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.26s 2025-02-10 09:00:09.136752 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.22s 2025-02-10 09:00:09.137129 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.18s 2025-02-10 09:00:09.137377 | orchestrator | Do not require tty for all users ---------------------------------------- 0.82s 2025-02-10 09:00:09.138373 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.77s 2025-02-10 09:00:09.138908 | orchestrator | 
osism.commons.operator : Create operator group -------------------------- 0.77s 2025-02-10 09:00:09.139677 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s 2025-02-10 09:00:09.139967 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s 2025-02-10 09:00:09.141029 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s 2025-02-10 09:00:09.141778 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s 2025-02-10 09:00:09.142087 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.21s 2025-02-10 09:00:09.142652 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s 2025-02-10 09:00:09.143639 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s 2025-02-10 09:00:09.143889 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s 2025-02-10 09:00:09.144683 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2025-02-10 09:00:09.145079 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s 2025-02-10 09:00:09.542509 | orchestrator | + osism apply --environment custom facts 2025-02-10 09:00:10.948422 | orchestrator | 2025-02-10 09:00:10 | INFO  | Trying to run play facts in environment custom 2025-02-10 09:00:10.994900 | orchestrator | 2025-02-10 09:00:10 | INFO  | Task 59af0de6-1e24-4067-96ab-ac4c0c48b930 (facts) was prepared for execution. 2025-02-10 09:00:14.170861 | orchestrator | 2025-02-10 09:00:10 | INFO  | It takes a moment until task 59af0de6-1e24-4067-96ab-ac4c0c48b930 (facts) has been started and output is visible here. 
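The facts play that starts here distributes custom fact files to the testbed hosts. Assuming they are installed as regular Ansible local facts (the standard /etc/ansible/facts.d mechanism, which is an assumption here), they can be inspected directly on a node:

# Sketch only: list and read the custom facts on a testbed node.
ls /etc/ansible/facts.d/
sudo cat /etc/ansible/facts.d/*.fact
# Once gathered, these values are exposed to playbooks under ansible_local.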
2025-02-10 09:00:14.171042 | orchestrator | 2025-02-10 09:00:14.171386 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-02-10 09:00:14.171415 | orchestrator | 2025-02-10 09:00:14.171477 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-02-10 09:00:14.171810 | orchestrator | Monday 10 February 2025 09:00:14 +0000 (0:00:00.095) 0:00:00.095 ******* 2025-02-10 09:00:15.540922 | orchestrator | ok: [testbed-manager] 2025-02-10 09:00:15.542111 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:00:15.543318 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:00:15.544333 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:00:15.544728 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:00:15.545596 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:00:15.545856 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:00:15.546539 | orchestrator | 2025-02-10 09:00:15.547313 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-02-10 09:00:15.547586 | orchestrator | Monday 10 February 2025 09:00:15 +0000 (0:00:01.370) 0:00:01.466 ******* 2025-02-10 09:00:16.774517 | orchestrator | ok: [testbed-manager] 2025-02-10 09:00:16.775096 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:00:16.775170 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:00:16.775860 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:00:16.775908 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:00:16.776496 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:00:16.776940 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:00:16.778460 | orchestrator | 2025-02-10 09:00:16.778873 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-02-10 09:00:16.779120 | orchestrator | 2025-02-10 09:00:16.779937 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-02-10 09:00:16.780325 | orchestrator | Monday 10 February 2025 09:00:16 +0000 (0:00:01.232) 0:00:02.699 ******* 2025-02-10 09:00:16.894615 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:00:16.895244 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:00:16.897553 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:00:17.050965 | orchestrator | 2025-02-10 09:00:17.051137 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-02-10 09:00:17.051162 | orchestrator | Monday 10 February 2025 09:00:16 +0000 (0:00:00.122) 0:00:02.821 ******* 2025-02-10 09:00:17.051199 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:00:17.051804 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:00:17.052364 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:00:17.053551 | orchestrator | 2025-02-10 09:00:17.054093 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-02-10 09:00:17.054744 | orchestrator | Monday 10 February 2025 09:00:17 +0000 (0:00:00.154) 0:00:02.976 ******* 2025-02-10 09:00:17.185972 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:00:17.186667 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:00:17.187186 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:00:17.187220 | orchestrator | 2025-02-10 09:00:17.187822 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-02-10 09:00:17.188856 | orchestrator | Monday 10 
February 2025 09:00:17 +0000 (0:00:00.137) 0:00:03.113 ******* 2025-02-10 09:00:17.346980 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:00:17.348116 | orchestrator | 2025-02-10 09:00:17.348162 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-02-10 09:00:17.349815 | orchestrator | Monday 10 February 2025 09:00:17 +0000 (0:00:00.158) 0:00:03.272 ******* 2025-02-10 09:00:17.834217 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:00:17.834557 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:00:17.834617 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:00:17.835406 | orchestrator | 2025-02-10 09:00:17.835511 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-02-10 09:00:17.835871 | orchestrator | Monday 10 February 2025 09:00:17 +0000 (0:00:00.489) 0:00:03.761 ******* 2025-02-10 09:00:17.941103 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:00:17.942174 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:00:17.943350 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:00:17.944009 | orchestrator | 2025-02-10 09:00:17.945072 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-02-10 09:00:17.946159 | orchestrator | Monday 10 February 2025 09:00:17 +0000 (0:00:00.107) 0:00:03.868 ******* 2025-02-10 09:00:18.906780 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:00:18.907403 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:00:18.907423 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:00:18.908409 | orchestrator | 2025-02-10 09:00:18.909585 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-02-10 09:00:18.910280 | orchestrator | Monday 10 February 2025 09:00:18 +0000 (0:00:00.962) 0:00:04.831 ******* 2025-02-10 09:00:19.368587 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:00:19.368802 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:00:19.369874 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:00:19.371209 | orchestrator | 2025-02-10 09:00:19.372327 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-02-10 09:00:19.374182 | orchestrator | Monday 10 February 2025 09:00:19 +0000 (0:00:00.463) 0:00:05.295 ******* 2025-02-10 09:00:20.435318 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:00:20.436626 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:00:20.437213 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:00:20.438647 | orchestrator | 2025-02-10 09:00:20.438772 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-02-10 09:00:20.439395 | orchestrator | Monday 10 February 2025 09:00:20 +0000 (0:00:01.065) 0:00:06.360 ******* 2025-02-10 09:00:33.824520 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:00:33.825868 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:00:33.825914 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:00:33.825929 | orchestrator | 2025-02-10 09:00:33.826857 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-02-10 09:00:33.826918 | orchestrator | Monday 10 February 2025 09:00:33 +0000 (0:00:13.383) 0:00:19.744 ******* 2025-02-10 09:00:33.878538 | orchestrator | 
skipping: [testbed-node-3] 2025-02-10 09:00:33.933230 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:00:33.933487 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:00:33.934180 | orchestrator | 2025-02-10 09:00:33.936330 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-02-10 09:00:41.368548 | orchestrator | Monday 10 February 2025 09:00:33 +0000 (0:00:00.116) 0:00:19.860 ******* 2025-02-10 09:00:41.368745 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:00:41.369331 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:00:41.370680 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:00:41.370723 | orchestrator | 2025-02-10 09:00:41.371121 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-02-10 09:00:41.372071 | orchestrator | Monday 10 February 2025 09:00:41 +0000 (0:00:07.433) 0:00:27.294 ******* 2025-02-10 09:00:41.815779 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:00:41.819858 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:00:41.819996 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:00:41.820492 | orchestrator | 2025-02-10 09:00:41.820591 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-02-10 09:00:41.821947 | orchestrator | Monday 10 February 2025 09:00:41 +0000 (0:00:00.445) 0:00:27.740 ******* 2025-02-10 09:00:45.441974 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-02-10 09:00:45.442404 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-02-10 09:00:45.443664 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-02-10 09:00:45.445405 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-02-10 09:00:45.446121 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-02-10 09:00:45.446706 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-02-10 09:00:45.447128 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-02-10 09:00:45.447846 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-02-10 09:00:45.448308 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-02-10 09:00:45.449062 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-02-10 09:00:45.449329 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-02-10 09:00:45.450569 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-02-10 09:00:45.452097 | orchestrator | 2025-02-10 09:00:45.452206 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-02-10 09:00:45.453138 | orchestrator | Monday 10 February 2025 09:00:45 +0000 (0:00:03.627) 0:00:31.368 ******* 2025-02-10 09:00:46.505410 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:00:46.506363 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:00:46.506488 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:00:46.506527 | orchestrator | 2025-02-10 09:00:46.506720 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-10 09:00:46.507066 | orchestrator | 2025-02-10 09:00:46.507473 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-10 09:00:46.510912 | orchestrator | 
Monday 10 February 2025 09:00:46 +0000 (0:00:01.065) 0:00:32.433 *******
2025-02-10 09:00:50.374505 | orchestrator | ok: [testbed-node-1]
2025-02-10 09:00:50.374884 | orchestrator | ok: [testbed-node-2]
2025-02-10 09:00:50.375632 | orchestrator | ok: [testbed-node-0]
2025-02-10 09:00:50.375761 | orchestrator | ok: [testbed-manager]
2025-02-10 09:00:50.375841 | orchestrator | ok: [testbed-node-3]
2025-02-10 09:00:50.376607 | orchestrator | ok: [testbed-node-4]
2025-02-10 09:00:50.377012 | orchestrator | ok: [testbed-node-5]
2025-02-10 09:00:50.377923 | orchestrator |
2025-02-10 09:00:50.378548 | orchestrator | PLAY RECAP *********************************************************************
2025-02-10 09:00:50.379239 | orchestrator | 2025-02-10 09:00:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-02-10 09:00:50.379781 | orchestrator | 2025-02-10 09:00:50 | INFO  | Please wait and do not abort execution.
2025-02-10 09:00:50.380811 | orchestrator | testbed-manager : ok=3  changed=0  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-10 09:00:50.380912 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-10 09:00:50.381821 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-10 09:00:50.382273 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-10 09:00:50.383287 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-02-10 09:00:50.384550 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-02-10 09:00:50.384810 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-02-10 09:00:50.385733 | orchestrator |
2025-02-10 09:00:50.386202 | orchestrator |
2025-02-10 09:00:50.387084 | orchestrator | TASKS RECAP ********************************************************************
2025-02-10 09:00:50.387835 | orchestrator | Monday 10 February 2025 09:00:50 +0000 (0:00:03.866) 0:00:36.299 *******
2025-02-10 09:00:50.388595 | orchestrator | ===============================================================================
2025-02-10 09:00:50.391814 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.38s
2025-02-10 09:00:50.391910 | orchestrator | Install required packages (Debian) -------------------------------------- 7.43s
2025-02-10 09:00:50.391929 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.87s
2025-02-10 09:00:50.391948 | orchestrator | Copy fact files --------------------------------------------------------- 3.63s
2025-02-10 09:00:50.392544 | orchestrator | Create custom facts directory ------------------------------------------- 1.37s
2025-02-10 09:00:50.392738 | orchestrator | Copy fact file ---------------------------------------------------------- 1.23s
2025-02-10 09:00:50.393460 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s
2025-02-10 09:00:50.393699 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.07s
2025-02-10 09:00:50.394362 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.96s
2025-02-10 09:00:50.394617 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.49s
2025-02-10 09:00:50.394986 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2025-02-10 09:00:50.395495 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s
2025-02-10 09:00:50.395969 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2025-02-10 09:00:50.396341 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.15s
2025-02-10 09:00:50.396719 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.14s
2025-02-10 09:00:50.398273 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2025-02-10 09:00:50.398706 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s
2025-02-10 09:00:50.399101 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2025-02-10 09:00:50.961754 | orchestrator | + osism apply bootstrap
2025-02-10 09:00:52.433095 | orchestrator | 2025-02-10 09:00:52 | INFO  | Task 0843bf3c-195d-4bc0-8e5e-7d0b67aab2fb (bootstrap) was prepared for execution.
2025-02-10 09:00:55.714893 | orchestrator | 2025-02-10 09:00:52 | INFO  | It takes a moment until task 0843bf3c-195d-4bc0-8e5e-7d0b67aab2fb (bootstrap) has been started and output is visible here.
2025-02-10 09:00:55.715058 | orchestrator |
2025-02-10 09:00:55.717569 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-02-10 09:00:55.718665 | orchestrator |
2025-02-10 09:00:55.719073 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-02-10 09:00:55.719725 | orchestrator | Monday 10 February 2025 09:00:55 +0000 (0:00:00.105) 0:00:00.105 *******
2025-02-10 09:00:55.784648 | orchestrator | ok: [testbed-manager]
2025-02-10 09:00:55.848869 | orchestrator | ok: [testbed-node-3]
2025-02-10 09:00:55.890675 | orchestrator | ok: [testbed-node-4]
2025-02-10 09:00:55.915842 | orchestrator | ok: [testbed-node-5]
2025-02-10 09:00:56.001695 | orchestrator | ok: [testbed-node-0]
2025-02-10 09:00:56.004621 | orchestrator | ok: [testbed-node-1]
2025-02-10 09:00:56.006830 | orchestrator | ok: [testbed-node-2]
2025-02-10 09:00:56.007616 | orchestrator |
2025-02-10 09:00:56.010578 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-02-10 09:00:56.011525 | orchestrator |
2025-02-10 09:00:56.011560 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-02-10 09:00:56.012025 | orchestrator | Monday 10 February 2025 09:00:55 +0000 (0:00:00.288) 0:00:00.393 *******
2025-02-10 09:00:59.807306 | orchestrator | ok: [testbed-node-0]
2025-02-10 09:00:59.808159 | orchestrator | ok: [testbed-node-1]
2025-02-10 09:00:59.808220 | orchestrator | ok: [testbed-node-2]
2025-02-10 09:00:59.808257 | orchestrator | ok: [testbed-manager]
2025-02-10 09:00:59.809785 | orchestrator | ok: [testbed-node-5]
2025-02-10 09:00:59.810237 | orchestrator | ok: [testbed-node-3]
2025-02-10 09:00:59.810783 | orchestrator | ok: [testbed-node-4]
2025-02-10 09:00:59.811330 | orchestrator |
2025-02-10 09:00:59.812202 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-02-10 09:00:59.813046 | orchestrator |
2025-02-10 09:00:59.813382 | orchestrator | TASK
[Gathers facts about hosts] *********************************************** 2025-02-10 09:00:59.814512 | orchestrator | Monday 10 February 2025 09:00:59 +0000 (0:00:03.806) 0:00:04.200 ******* 2025-02-10 09:00:59.913273 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-02-10 09:00:59.913604 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-02-10 09:00:59.913854 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-02-10 09:00:59.940906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-02-10 09:00:59.965756 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-02-10 09:00:59.965879 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-02-10 09:00:59.967870 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-02-10 09:01:00.328652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:01:00.328791 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-02-10 09:01:00.328991 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-10 09:01:00.330838 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-02-10 09:01:00.331204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:01:00.332380 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-02-10 09:01:00.333310 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:01:00.333833 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-10 09:01:00.335485 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-10 09:01:00.335730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:01:00.335764 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-02-10 09:01:00.336328 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-10 09:01:00.336907 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-10 09:01:00.337350 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-10 09:01:00.338005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:01:00.338694 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-02-10 09:01:00.339165 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-10 09:01:00.339742 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-02-10 09:01:00.340445 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-10 09:01:00.340888 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:01:00.341514 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-10 09:01:00.341790 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-10 09:01:00.342180 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:01:00.342741 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:01:00.343263 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-10 09:01:00.343557 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-10 09:01:00.343835 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:01:00.344766 | orchestrator | skipping: [testbed-node-2] => 
(item=testbed-node-3)  2025-02-10 09:01:00.344843 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:01:00.345339 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-10 09:01:00.345710 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-10 09:01:00.346134 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-10 09:01:00.346542 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-10 09:01:00.346943 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:01:00.347126 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:01:00.347949 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:01:00.348022 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:01:00.348356 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-10 09:01:00.348522 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:01:00.349124 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-10 09:01:00.349393 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-10 09:01:00.349772 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:01:00.349801 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:01:00.350046 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-10 09:01:00.350287 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:01:00.351298 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-10 09:01:00.351740 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-10 09:01:00.351771 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-10 09:01:00.352183 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:01:00.352205 | orchestrator | 2025-02-10 09:01:00.352218 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-02-10 09:01:00.352704 | orchestrator | 2025-02-10 09:01:00.352840 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-02-10 09:01:00.352886 | orchestrator | Monday 10 February 2025 09:01:00 +0000 (0:00:00.521) 0:00:04.722 ******* 2025-02-10 09:01:00.404508 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:00.431959 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:00.457878 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:00.488231 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:00.546148 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:00.546361 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:00.547311 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:00.547350 | orchestrator | 2025-02-10 09:01:00.548196 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-02-10 09:01:00.548295 | orchestrator | Monday 10 February 2025 09:01:00 +0000 (0:00:00.217) 0:00:04.939 ******* 2025-02-10 09:01:01.754145 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:01.754447 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:01.754492 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:01.754525 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:01.755354 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:01.755398 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:01.756142 | orchestrator | ok: 
[testbed-node-3] 2025-02-10 09:01:01.756992 | orchestrator | 2025-02-10 09:01:01.757458 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-02-10 09:01:01.758325 | orchestrator | Monday 10 February 2025 09:01:01 +0000 (0:00:01.206) 0:00:06.146 ******* 2025-02-10 09:01:03.126362 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:03.127761 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:03.127802 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:03.127817 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:03.127832 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:03.127854 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:03.128260 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:03.128687 | orchestrator | 2025-02-10 09:01:03.128979 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-02-10 09:01:03.129481 | orchestrator | Monday 10 February 2025 09:01:03 +0000 (0:00:01.366) 0:00:07.513 ******* 2025-02-10 09:01:03.413064 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:01:03.413585 | orchestrator | 2025-02-10 09:01:03.413620 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-02-10 09:01:03.413644 | orchestrator | Monday 10 February 2025 09:01:03 +0000 (0:00:00.292) 0:00:07.805 ******* 2025-02-10 09:01:05.589468 | orchestrator | changed: [testbed-manager] 2025-02-10 09:01:05.591804 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:05.593147 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:05.593872 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:05.594820 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:05.595571 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:05.596035 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:05.597147 | orchestrator | 2025-02-10 09:01:05.597877 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-02-10 09:01:05.598586 | orchestrator | Monday 10 February 2025 09:01:05 +0000 (0:00:02.174) 0:00:09.980 ******* 2025-02-10 09:01:05.667026 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:01:05.848351 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:01:05.849027 | orchestrator | 2025-02-10 09:01:05.849922 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-02-10 09:01:05.850712 | orchestrator | Monday 10 February 2025 09:01:05 +0000 (0:00:00.261) 0:00:10.241 ******* 2025-02-10 09:01:06.892171 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:06.892498 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:06.892532 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:06.892547 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:06.892980 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:06.893953 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:06.894366 | orchestrator | 2025-02-10 09:01:06.895732 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in 
environment file] ****** 2025-02-10 09:01:06.896575 | orchestrator | Monday 10 February 2025 09:01:06 +0000 (0:00:01.039) 0:00:11.281 ******* 2025-02-10 09:01:06.965693 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:01:07.509412 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:07.510000 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:07.510630 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:07.511616 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:07.513027 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:07.513215 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:07.514051 | orchestrator | 2025-02-10 09:01:07.514540 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-02-10 09:01:07.515596 | orchestrator | Monday 10 February 2025 09:01:07 +0000 (0:00:00.620) 0:00:11.902 ******* 2025-02-10 09:01:07.607529 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:01:07.634184 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:01:07.659915 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:01:07.940328 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:01:07.943353 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:01:07.944491 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:01:07.945686 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:07.946820 | orchestrator | 2025-02-10 09:01:07.948289 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-02-10 09:01:07.948886 | orchestrator | Monday 10 February 2025 09:01:07 +0000 (0:00:00.430) 0:00:12.332 ******* 2025-02-10 09:01:08.019916 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:01:08.042382 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:01:08.065091 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:01:08.168076 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:01:08.168340 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:01:08.168383 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:01:08.169329 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:01:08.169364 | orchestrator | 2025-02-10 09:01:08.169679 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-02-10 09:01:08.169943 | orchestrator | Monday 10 February 2025 09:01:08 +0000 (0:00:00.228) 0:00:12.561 ******* 2025-02-10 09:01:08.468729 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:01:08.468952 | orchestrator | 2025-02-10 09:01:08.468984 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-02-10 09:01:08.469849 | orchestrator | Monday 10 February 2025 09:01:08 +0000 (0:00:00.300) 0:00:12.862 ******* 2025-02-10 09:01:08.784612 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:01:08.784912 | orchestrator | 2025-02-10 09:01:08.786078 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-02-10 09:01:08.786465 | 
orchestrator | Monday 10 February 2025 09:01:08 +0000 (0:00:00.313) 0:00:13.175 ******* 2025-02-10 09:01:10.120315 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:10.121372 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:10.121542 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:10.121643 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:10.122168 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:10.122982 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:10.123864 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:10.124403 | orchestrator | 2025-02-10 09:01:10.125132 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-02-10 09:01:10.126106 | orchestrator | Monday 10 February 2025 09:01:10 +0000 (0:00:01.334) 0:00:14.509 ******* 2025-02-10 09:01:10.204211 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:01:10.230290 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:01:10.257441 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:01:10.281107 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:01:10.343291 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:01:10.344652 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:01:10.346833 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:01:10.348998 | orchestrator | 2025-02-10 09:01:10.350073 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-02-10 09:01:10.350735 | orchestrator | Monday 10 February 2025 09:01:10 +0000 (0:00:00.224) 0:00:14.734 ******* 2025-02-10 09:01:10.897877 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:10.898077 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:10.899948 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:10.901686 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:10.902786 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:10.903942 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:10.904916 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:10.905815 | orchestrator | 2025-02-10 09:01:10.906669 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-02-10 09:01:10.907585 | orchestrator | Monday 10 February 2025 09:01:10 +0000 (0:00:00.554) 0:00:15.289 ******* 2025-02-10 09:01:10.974982 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:01:11.034108 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:01:11.061919 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:01:11.133771 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:01:11.134677 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:01:11.135808 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:01:11.136885 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:01:11.137103 | orchestrator | 2025-02-10 09:01:11.137814 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-02-10 09:01:11.138328 | orchestrator | Monday 10 February 2025 09:01:11 +0000 (0:00:00.237) 0:00:15.526 ******* 2025-02-10 09:01:11.681043 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:11.681562 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:11.681611 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:11.682258 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:11.684010 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:11.684459 | orchestrator | 
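The resolvconf tasks above swap /etc/resolv.conf for systemd-resolved's stub file and make sure the service is running. A generic sketch of that step follows; it uses only stock Ansible modules and the paths named in the task titles, and is not the actual osism.commons.resolvconf implementation.

    - name: Hand /etc/resolv.conf over to systemd-resolved (illustrative sketch)
      hosts: all
      become: true
      tasks:
        - name: Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf
          ansible.builtin.file:
            src: /run/systemd/resolve/stub-resolv.conf
            dest: /etc/resolv.conf
            state: link
            force: true

        - name: Start/enable systemd-resolved service
          ansible.builtin.service:
            name: systemd-resolved
            state: started
            enabled: true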
changed: [testbed-node-1] 2025-02-10 09:01:11.685742 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:11.687022 | orchestrator | 2025-02-10 09:01:11.687620 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-02-10 09:01:11.689087 | orchestrator | Monday 10 February 2025 09:01:11 +0000 (0:00:00.547) 0:00:16.073 ******* 2025-02-10 09:01:12.810085 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:12.812396 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:12.816035 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:12.816471 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:12.816862 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:12.817242 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:12.817650 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:12.818100 | orchestrator | 2025-02-10 09:01:12.818511 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-02-10 09:01:12.818919 | orchestrator | Monday 10 February 2025 09:01:12 +0000 (0:00:01.124) 0:00:17.198 ******* 2025-02-10 09:01:14.088217 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:14.088715 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:14.089106 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:14.090821 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:14.091174 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:14.091671 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:14.092355 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:14.093142 | orchestrator | 2025-02-10 09:01:14.093817 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-02-10 09:01:14.094288 | orchestrator | Monday 10 February 2025 09:01:14 +0000 (0:00:01.279) 0:00:18.478 ******* 2025-02-10 09:01:14.492556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:01:14.494271 | orchestrator | 2025-02-10 09:01:14.494899 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-02-10 09:01:14.495953 | orchestrator | Monday 10 February 2025 09:01:14 +0000 (0:00:00.406) 0:00:18.884 ******* 2025-02-10 09:01:14.568804 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:01:15.814463 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:16.113714 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:16.113802 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:16.142643 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:16.142783 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:16.142802 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:16.142817 | orchestrator | 2025-02-10 09:01:16.142834 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-02-10 09:01:16.142849 | orchestrator | Monday 10 February 2025 09:01:15 +0000 (0:00:01.321) 0:00:20.206 ******* 2025-02-10 09:01:16.142863 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:16.142878 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:16.142892 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:16.142906 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:16.142921 | 
orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:16.142934 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:16.142948 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:16.142962 | orchestrator | 2025-02-10 09:01:16.142976 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-02-10 09:01:16.142992 | orchestrator | Monday 10 February 2025 09:01:16 +0000 (0:00:00.250) 0:00:20.457 ******* 2025-02-10 09:01:16.143024 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:16.176612 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:16.203294 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:16.241171 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:16.311326 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:16.311857 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:16.312286 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:16.313174 | orchestrator | 2025-02-10 09:01:16.315962 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-02-10 09:01:16.404152 | orchestrator | Monday 10 February 2025 09:01:16 +0000 (0:00:00.247) 0:00:20.704 ******* 2025-02-10 09:01:16.404309 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:16.438218 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:16.462629 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:16.491387 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:16.569558 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:16.569875 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:16.569924 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:16.573148 | orchestrator | 2025-02-10 09:01:16.848651 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-02-10 09:01:16.848797 | orchestrator | Monday 10 February 2025 09:01:16 +0000 (0:00:00.256) 0:00:20.961 ******* 2025-02-10 09:01:16.848840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:01:16.851082 | orchestrator | 2025-02-10 09:01:17.389088 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-02-10 09:01:17.389228 | orchestrator | Monday 10 February 2025 09:01:16 +0000 (0:00:00.277) 0:00:21.239 ******* 2025-02-10 09:01:17.389267 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:17.389336 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:17.389752 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:17.390135 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:17.390611 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:17.393292 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:17.467021 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:17.467198 | orchestrator | 2025-02-10 09:01:17.467219 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-02-10 09:01:17.467260 | orchestrator | Monday 10 February 2025 09:01:17 +0000 (0:00:00.539) 0:00:21.779 ******* 2025-02-10 09:01:17.467297 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:01:17.490183 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:01:17.518258 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:01:17.542563 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:01:17.616080 | 
orchestrator | skipping: [testbed-node-0] 2025-02-10 09:01:17.616625 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:01:17.616673 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:01:17.617673 | orchestrator | 2025-02-10 09:01:17.617984 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-02-10 09:01:17.622595 | orchestrator | Monday 10 February 2025 09:01:17 +0000 (0:00:00.229) 0:00:22.009 ******* 2025-02-10 09:01:18.705706 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:18.705985 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:18.706084 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:18.706102 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:18.706207 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:18.706237 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:18.706299 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:18.706321 | orchestrator | 2025-02-10 09:01:18.706493 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-02-10 09:01:18.708859 | orchestrator | Monday 10 February 2025 09:01:18 +0000 (0:00:01.081) 0:00:23.090 ******* 2025-02-10 09:01:19.258638 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:19.258866 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:19.260281 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:19.260318 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:19.261548 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:19.263037 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:19.263850 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:19.263881 | orchestrator | 2025-02-10 09:01:19.265213 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-02-10 09:01:19.266259 | orchestrator | Monday 10 February 2025 09:01:19 +0000 (0:00:00.559) 0:00:23.649 ******* 2025-02-10 09:01:20.505821 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:20.506410 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:20.506491 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:20.507291 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:20.508134 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:20.508574 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:20.509693 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:20.510073 | orchestrator | 2025-02-10 09:01:20.510951 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-02-10 09:01:20.511777 | orchestrator | Monday 10 February 2025 09:01:20 +0000 (0:00:01.246) 0:00:24.896 ******* 2025-02-10 09:01:33.859481 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:33.859930 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:33.859976 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:33.861267 | orchestrator | changed: [testbed-manager] 2025-02-10 09:01:33.861864 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:33.862453 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:33.863262 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:33.864111 | orchestrator | 2025-02-10 09:01:33.864658 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-02-10 09:01:33.865472 | orchestrator | Monday 10 February 2025 09:01:33 +0000 (0:00:13.350) 0:00:38.247 ******* 2025-02-10 09:01:33.957106 | orchestrator | ok: [testbed-manager] 
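On Ubuntu 24.04 the repository tasks above drop the classic /etc/apt/sources.list and install a deb822-style ubuntu.sources file instead, then refresh the package cache. A rough sketch of such a step is shown below; the mirror URI, suites, keyring path and handler wiring are assumptions for illustration, not the contents of the osism.commons.repository role.

    - name: Switch APT to a deb822 ubuntu.sources file (illustrative sketch)
      hosts: all
      become: true
      tasks:
        - name: Remove sources.list file
          ansible.builtin.file:
            path: /etc/apt/sources.list
            state: absent

        - name: Copy ubuntu.sources file (mirror and suites are assumed examples)
          ansible.builtin.copy:
            dest: /etc/apt/sources.list.d/ubuntu.sources
            content: |
              Types: deb
              URIs: http://archive.ubuntu.com/ubuntu
              Suites: noble noble-updates noble-security
              Components: main restricted universe multiverse
              Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
            mode: "0644"
          notify: Force update of package cache

      handlers:
        - name: Force update of package cache
          ansible.builtin.apt:
            update_cache: true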
2025-02-10 09:01:33.983080 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:34.012268 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:34.073130 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:34.073279 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:34.074117 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:34.074635 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:34.074974 | orchestrator | 2025-02-10 09:01:34.075457 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-02-10 09:01:34.076116 | orchestrator | Monday 10 February 2025 09:01:34 +0000 (0:00:00.219) 0:00:38.466 ******* 2025-02-10 09:01:34.153836 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:34.184044 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:34.215962 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:34.256569 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:34.330100 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:34.330694 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:34.330730 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:34.331345 | orchestrator | 2025-02-10 09:01:34.331659 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-02-10 09:01:34.332070 | orchestrator | Monday 10 February 2025 09:01:34 +0000 (0:00:00.256) 0:00:38.723 ******* 2025-02-10 09:01:34.419296 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:34.451223 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:34.482138 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:34.510859 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:34.585012 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:34.585504 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:34.586740 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:34.588214 | orchestrator | 2025-02-10 09:01:34.588903 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-02-10 09:01:34.590241 | orchestrator | Monday 10 February 2025 09:01:34 +0000 (0:00:00.253) 0:00:38.977 ******* 2025-02-10 09:01:34.932065 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:01:34.933846 | orchestrator | 2025-02-10 09:01:34.933915 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-02-10 09:01:34.934947 | orchestrator | Monday 10 February 2025 09:01:34 +0000 (0:00:00.345) 0:00:39.323 ******* 2025-02-10 09:01:36.834570 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:36.834660 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:36.834684 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:36.835355 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:36.836300 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:36.837073 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:36.837823 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:36.838097 | orchestrator | 2025-02-10 09:01:36.838563 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-02-10 09:01:36.839004 | orchestrator | Monday 10 February 2025 09:01:36 +0000 (0:00:01.900) 0:00:41.223 ******* 2025-02-10 09:01:37.899242 | orchestrator | changed: [testbed-manager] 2025-02-10 
09:01:37.899819 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:37.899854 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:37.901305 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:37.901630 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:37.901652 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:37.902156 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:37.902610 | orchestrator | 2025-02-10 09:01:37.903033 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-02-10 09:01:37.903387 | orchestrator | Monday 10 February 2025 09:01:37 +0000 (0:00:01.066) 0:00:42.290 ******* 2025-02-10 09:01:38.749774 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:38.750891 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:38.750969 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:38.752163 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:38.752266 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:38.752290 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:38.752710 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:38.753088 | orchestrator | 2025-02-10 09:01:38.753120 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-02-10 09:01:38.754134 | orchestrator | Monday 10 February 2025 09:01:38 +0000 (0:00:00.850) 0:00:43.141 ******* 2025-02-10 09:01:39.071750 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:01:39.072016 | orchestrator | 2025-02-10 09:01:39.072051 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-02-10 09:01:39.073092 | orchestrator | Monday 10 February 2025 09:01:39 +0000 (0:00:00.319) 0:00:43.460 ******* 2025-02-10 09:01:40.207783 | orchestrator | changed: [testbed-manager] 2025-02-10 09:01:40.208124 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:40.208878 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:40.209625 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:40.210080 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:40.210955 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:40.211888 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:40.212248 | orchestrator | 2025-02-10 09:01:40.212740 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-02-10 09:01:40.213144 | orchestrator | Monday 10 February 2025 09:01:40 +0000 (0:00:01.138) 0:00:44.599 ******* 2025-02-10 09:01:40.298880 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:01:40.327918 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:01:40.358294 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:01:40.383900 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:01:40.534291 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:01:40.535286 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:01:40.538723 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:01:53.075656 | orchestrator | 2025-02-10 09:01:53.075822 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-02-10 09:01:53.075844 | orchestrator | Monday 10 February 2025 09:01:40 +0000 (0:00:00.328) 0:00:44.927 ******* 
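A quick note on the "Forward syslog message to local fluentd daemon" task that just ran: such forwarding is usually a one-line rsyslog drop-in. The sketch below is a generic example only; the drop-in name, target port and protocol are assumptions and not taken from the osism.services.rsyslog role.

    - name: Forward syslog to a local fluentd input (illustrative sketch)
      hosts: all
      become: true
      tasks:
        - name: Install rsyslog forwarding rule (target port is an assumption)
          ansible.builtin.copy:
            dest: /etc/rsyslog.d/10-fluentd.conf
            content: |
              *.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")
            mode: "0644"
          notify: Restart rsyslog

      handlers:
        - name: Restart rsyslog
          ansible.builtin.service:
            name: rsyslog
            state: restarted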
2025-02-10 09:01:53.075878 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:53.078149 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:53.078185 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:53.078202 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:53.078217 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:53.078233 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:53.078248 | orchestrator | changed: [testbed-manager] 2025-02-10 09:01:53.078263 | orchestrator | 2025-02-10 09:01:53.078286 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-02-10 09:01:53.079801 | orchestrator | Monday 10 February 2025 09:01:53 +0000 (0:00:12.535) 0:00:57.462 ******* 2025-02-10 09:01:54.671552 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:54.674315 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:54.675399 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:54.675472 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:54.675497 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:54.676127 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:54.676827 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:54.677518 | orchestrator | 2025-02-10 09:01:54.678170 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-02-10 09:01:54.678822 | orchestrator | Monday 10 February 2025 09:01:54 +0000 (0:00:01.599) 0:00:59.062 ******* 2025-02-10 09:01:55.609068 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:55.612193 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:55.613642 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:55.614570 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:55.615692 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:55.616713 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:55.618287 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:55.619058 | orchestrator | 2025-02-10 09:01:55.619315 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-02-10 09:01:55.620039 | orchestrator | Monday 10 February 2025 09:01:55 +0000 (0:00:00.937) 0:01:00.000 ******* 2025-02-10 09:01:55.663619 | orchestrator | [WARNING]: Found variable using reserved name: q 2025-02-10 09:01:55.691041 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:55.715683 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:55.751055 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:55.775969 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:55.859792 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:55.860182 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:55.860825 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:55.861598 | orchestrator | 2025-02-10 09:01:55.862088 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-02-10 09:01:55.862610 | orchestrator | Monday 10 February 2025 09:01:55 +0000 (0:00:00.252) 0:01:00.252 ******* 2025-02-10 09:01:55.966620 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:55.994903 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:56.039477 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:56.070900 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:56.142243 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:56.142543 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:56.143663 | orchestrator | ok: 
[testbed-node-2] 2025-02-10 09:01:56.143947 | orchestrator | 2025-02-10 09:01:56.144921 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-02-10 09:01:56.145122 | orchestrator | Monday 10 February 2025 09:01:56 +0000 (0:00:00.281) 0:01:00.534 ******* 2025-02-10 09:01:56.480722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:01:56.480944 | orchestrator | 2025-02-10 09:01:56.481311 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-02-10 09:01:56.482664 | orchestrator | Monday 10 February 2025 09:01:56 +0000 (0:00:00.338) 0:01:00.873 ******* 2025-02-10 09:01:58.221829 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:58.222943 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:58.223067 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:58.223736 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:58.224569 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:58.226128 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:58.226620 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:58.227534 | orchestrator | 2025-02-10 09:01:58.227803 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-02-10 09:01:58.228461 | orchestrator | Monday 10 February 2025 09:01:58 +0000 (0:00:01.739) 0:01:02.613 ******* 2025-02-10 09:01:58.814851 | orchestrator | changed: [testbed-manager] 2025-02-10 09:01:58.815203 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:01:58.815805 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:01:58.816632 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:01:58.817114 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:01:58.817470 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:01:58.818294 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:01:58.818580 | orchestrator | 2025-02-10 09:01:58.819255 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-02-10 09:01:58.819707 | orchestrator | Monday 10 February 2025 09:01:58 +0000 (0:00:00.595) 0:01:03.208 ******* 2025-02-10 09:01:58.891977 | orchestrator | ok: [testbed-manager] 2025-02-10 09:01:58.924887 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:01:58.957141 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:01:58.988166 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:01:59.064750 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:01:59.065173 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:01:59.065233 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:01:59.065351 | orchestrator | 2025-02-10 09:01:59.068528 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-02-10 09:01:59.068690 | orchestrator | Monday 10 February 2025 09:01:59 +0000 (0:00:00.249) 0:01:03.457 ******* 2025-02-10 09:02:00.198664 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:00.199627 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:00.199838 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:00.199953 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:02:00.201477 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:02:00.201830 | orchestrator | ok: [testbed-node-2] 2025-02-10 
09:02:00.202758 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:00.203626 | orchestrator | 2025-02-10 09:02:00.203882 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-02-10 09:02:00.204488 | orchestrator | Monday 10 February 2025 09:02:00 +0000 (0:00:01.131) 0:01:04.589 ******* 2025-02-10 09:02:01.905357 | orchestrator | changed: [testbed-manager] 2025-02-10 09:02:01.905968 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:02:01.906009 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:02:01.906162 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:02:01.907196 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:02:01.911200 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:02:01.911563 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:02:01.912167 | orchestrator | 2025-02-10 09:02:01.912853 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-02-10 09:02:01.914068 | orchestrator | Monday 10 February 2025 09:02:01 +0000 (0:00:01.705) 0:01:06.295 ******* 2025-02-10 09:02:04.290379 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:04.291842 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:02:04.293069 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:02:04.293832 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:04.294765 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:02:04.295341 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:04.296057 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:04.296629 | orchestrator | 2025-02-10 09:02:04.297226 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-02-10 09:02:04.298149 | orchestrator | Monday 10 February 2025 09:02:04 +0000 (0:00:02.381) 0:01:08.676 ******* 2025-02-10 09:02:41.890370 | orchestrator | ok: [testbed-manager] 2025-02-10 09:02:41.890710 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:02:41.890758 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:02:41.890797 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:02:41.890889 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:02:41.893171 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:02:41.894011 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:02:41.894366 | orchestrator | 2025-02-10 09:02:41.894971 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-02-10 09:02:41.896105 | orchestrator | Monday 10 February 2025 09:02:41 +0000 (0:00:37.601) 0:01:46.278 ******* 2025-02-10 09:03:57.507462 | orchestrator | changed: [testbed-manager] 2025-02-10 09:03:59.257970 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:03:59.258150 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:03:59.258166 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:03:59.258176 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:03:59.258184 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:03:59.258190 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:03:59.258223 | orchestrator | 2025-02-10 09:03:59.258230 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-02-10 09:03:59.258236 | orchestrator | Monday 10 February 2025 09:03:57 +0000 (0:01:15.616) 0:03:01.894 ******* 2025-02-10 09:03:59.258282 | orchestrator | ok: [testbed-manager] 2025-02-10 09:03:59.258326 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:03:59.258336 | orchestrator | 
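The package handling above (cache update, download of upgrades, upgrade, then a long download and install of required packages) corresponds to plain apt module calls. A condensed sketch follows; the package list is an arbitrary example, since the real required_packages_distribution list of the osism.commons.packages role is not visible in this log.

    - name: Update, upgrade and install packages (illustrative sketch)
      hosts: all
      become: true
      vars:
        # Example list only; the actual packages are defined by the role's variables.
        required_packages_distribution:
          - curl
          - jq
      tasks:
        - name: Update package cache
          ansible.builtin.apt:
            update_cache: true
            cache_valid_time: 3600

        - name: Download upgrade packages
          ansible.builtin.apt:
            upgrade: dist
            download_only: true

        - name: Upgrade packages
          ansible.builtin.apt:
            upgrade: dist

        - name: Install required packages
          ansible.builtin.apt:
            name: "{{ required_packages_distribution }}"
            state: present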
ok: [testbed-node-2] 2025-02-10 09:03:59.259782 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:03:59.260094 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:03:59.261527 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:03:59.262582 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:03:59.263234 | orchestrator | 2025-02-10 09:03:59.263958 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-02-10 09:03:59.264669 | orchestrator | Monday 10 February 2025 09:03:59 +0000 (0:00:01.754) 0:03:03.649 ******* 2025-02-10 09:04:05.568876 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:04:05.570564 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:04:05.571556 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:04:05.572399 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:04:05.573177 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:04:05.573862 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:04:05.575701 | orchestrator | changed: [testbed-manager] 2025-02-10 09:04:05.576317 | orchestrator | 2025-02-10 09:04:05.576843 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-02-10 09:04:05.577545 | orchestrator | Monday 10 February 2025 09:04:05 +0000 (0:00:06.309) 0:03:09.958 ******* 2025-02-10 09:04:05.957704 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-02-10 09:04:05.957979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-02-10 09:04:05.958665 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-02-10 09:04:05.959321 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-02-10 09:04:05.959748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2025-02-10 09:04:05.960467 | orchestrator | 2025-02-10 09:04:05.961525 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-02-10 09:04:05.962004 | orchestrator | Monday 10 February 2025 09:04:05 +0000 (0:00:00.390) 0:03:10.349 ******* 2025-02-10 09:04:06.020133 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-02-10 09:04:06.020457 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-02-10 09:04:06.049835 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:04:06.081343 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:04:06.122930 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-02-10 09:04:06.123080 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-02-10 09:04:06.155749 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:04:06.155930 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:04:06.640732 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-10 09:04:06.640859 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-10 09:04:06.642475 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-10 09:04:06.643526 | orchestrator | 2025-02-10 09:04:06.644281 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-02-10 09:04:06.644872 | orchestrator | Monday 10 February 2025 09:04:06 +0000 (0:00:00.682) 0:03:11.032 ******* 2025-02-10 09:04:06.701299 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-02-10 09:04:06.701506 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-02-10 09:04:06.702232 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-02-10 09:04:06.758371 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-02-10 09:04:06.758601 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-02-10 09:04:06.759866 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-02-10 09:04:06.760623 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-02-10 09:04:06.761474 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-02-10 09:04:06.762448 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-02-10 09:04:06.763475 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-02-10 09:04:06.763609 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-02-10 09:04:06.765026 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-02-10 09:04:06.765137 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-02-10 09:04:06.766669 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-02-10 09:04:06.767693 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-02-10 09:04:06.768774 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-02-10 09:04:06.769909 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-02-10 09:04:06.770734 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-02-10 09:04:06.772239 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-02-10 09:04:06.773223 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-02-10 09:04:06.777827 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-02-10 09:04:06.777994 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-02-10 09:04:06.780671 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-02-10 09:04:06.781410 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-02-10 09:04:06.805444 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:04:06.805648 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-02-10 09:04:06.806435 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-02-10 09:04:06.807789 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-02-10 09:04:06.808962 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-02-10 09:04:06.848095 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-02-10 09:04:06.848226 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:04:06.848301 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-02-10 09:04:06.848945 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-02-10 09:04:06.850006 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-02-10 09:04:06.850166 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-02-10 09:04:06.851139 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-02-10 09:04:06.851621 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-02-10 09:04:06.851740 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-02-10 09:04:06.851866 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-02-10 09:04:06.851998 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-02-10 09:04:06.852109 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-02-10 09:04:06.852454 | 
orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-02-10 09:04:06.877131 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:04:13.591067 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:04:13.591285 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-02-10 09:04:13.591736 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-02-10 09:04:13.591771 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-02-10 09:04:13.592443 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-02-10 09:04:13.593782 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-02-10 09:04:13.595588 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-02-10 09:04:13.595840 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-02-10 09:04:13.596341 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-02-10 09:04:13.596368 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-02-10 09:04:13.596420 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-02-10 09:04:13.596783 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-02-10 09:04:13.597321 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-02-10 09:04:13.597651 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-02-10 09:04:13.598404 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-02-10 09:04:13.598628 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-02-10 09:04:13.598964 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-02-10 09:04:13.599362 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-02-10 09:04:13.599752 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-02-10 09:04:13.600253 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-02-10 09:04:13.600980 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-02-10 09:04:13.601422 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-02-10 09:04:13.601873 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-02-10 09:04:13.602712 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-02-10 09:04:13.603031 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-02-10 09:04:13.603468 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-02-10 09:04:13.603777 | 
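
The sysctl tasks in this section walk over named parameter groups (elasticsearch, rabbitmq, generic, compute, k3s_node) and apply each name/value pair only on the hosts the group targets; hosts outside a group report "skipping". A minimal sketch of that pattern with the stock ansible.posix.sysctl module, assuming a group-membership guard and an illustrative variable name (not the role's actual internals):

---
# Sketch only: apply one group of kernel parameters and persist them.
# "rabbitmq_sysctl_parameters" is an illustrative variable, not the role's internal name.
- name: Set sysctl parameters for the rabbitmq group
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    sysctl_set: true      # apply immediately (sysctl -w) ...
    state: present        # ... and persist the entry
    reload: true
  loop: "{{ rabbitmq_sysctl_parameters }}"
  when: "'rabbitmq' in group_names"   # hosts outside the group show up as "skipping"
  vars:
    rabbitmq_sysctl_parameters:
      - { name: net.ipv4.tcp_keepalive_time, value: 6 }
      - { name: net.core.somaxconn, value: 4096 }
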
orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-02-10 09:04:13.604278 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-02-10 09:04:13.604630 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-02-10 09:04:13.605034 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-02-10 09:04:13.605412 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-02-10 09:04:13.605623 | orchestrator | 2025-02-10 09:04:13.606739 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-02-10 09:04:15.140843 | orchestrator | Monday 10 February 2025 09:04:13 +0000 (0:00:06.949) 0:03:17.981 ******* 2025-02-10 09:04:15.141029 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-10 09:04:15.141105 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-10 09:04:15.141125 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-10 09:04:15.141144 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-10 09:04:15.141276 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-10 09:04:15.141633 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-10 09:04:15.141901 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-10 09:04:15.143306 | orchestrator | 2025-02-10 09:04:15.143536 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-02-10 09:04:15.143641 | orchestrator | Monday 10 February 2025 09:04:15 +0000 (0:00:01.550) 0:03:19.532 ******* 2025-02-10 09:04:15.194163 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-02-10 09:04:15.249941 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:04:15.316167 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-02-10 09:04:15.318319 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-02-10 09:04:16.646510 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:04:16.647726 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:04:16.648075 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-02-10 09:04:16.649453 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:04:16.650639 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-02-10 09:04:16.650755 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-02-10 09:04:16.652819 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-02-10 09:04:16.653668 | orchestrator | 2025-02-10 09:04:16.654642 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-02-10 09:04:16.654985 | orchestrator | Monday 10 February 2025 09:04:16 +0000 
(0:00:01.505) 0:03:21.037 ******* 2025-02-10 09:04:16.715978 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-02-10 09:04:16.742788 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:04:16.825573 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-02-10 09:04:17.271060 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-02-10 09:04:17.273636 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:04:17.273917 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:04:17.274338 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-02-10 09:04:17.274865 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:04:17.275501 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-02-10 09:04:17.275948 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-02-10 09:04:17.276449 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-02-10 09:04:17.277216 | orchestrator | 2025-02-10 09:04:17.277631 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-02-10 09:04:17.278122 | orchestrator | Monday 10 February 2025 09:04:17 +0000 (0:00:00.624) 0:03:21.661 ******* 2025-02-10 09:04:17.360738 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:04:17.391751 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:04:17.422704 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:04:17.445358 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:04:17.574914 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:04:17.575680 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:04:17.576349 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:04:17.576732 | orchestrator | 2025-02-10 09:04:17.577920 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-02-10 09:04:17.578253 | orchestrator | Monday 10 February 2025 09:04:17 +0000 (0:00:00.304) 0:03:21.966 ******* 2025-02-10 09:04:22.920026 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:04:22.923707 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:04:22.923798 | orchestrator | ok: [testbed-manager] 2025-02-10 09:04:22.923870 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:04:22.923888 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:04:22.923902 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:04:22.923921 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:04:22.924772 | orchestrator | 2025-02-10 09:04:22.925346 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-02-10 09:04:22.925768 | orchestrator | Monday 10 February 2025 09:04:22 +0000 (0:00:05.344) 0:03:27.311 ******* 2025-02-10 09:04:23.006258 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-02-10 09:04:23.040631 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-02-10 09:04:23.040745 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:04:23.098658 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:04:23.098933 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-02-10 09:04:23.136646 | orchestrator | 
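
The osism.commons.services tasks that follow first gather service facts, then check for services that must not be running (the nscd check is skipped on all hosts), and finally make sure required services such as cron are enabled and started. A hedged sketch of the enable/start step using builtin modules; the list variable is assumed for illustration:

---
# Sketch: gather facts about installed services, then enable/start the required ones.
- name: Populate service facts
  ansible.builtin.service_facts:

- name: Start/enable required services
  ansible.builtin.service:
    name: "{{ item }}"
    state: started
    enabled: true
  loop: "{{ required_services | default(['cron']) }}"   # assumed variable name
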
skipping: [testbed-node-5] => (item=nscd)  2025-02-10 09:04:23.136826 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:04:23.136904 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-02-10 09:04:23.195417 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:04:23.196340 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-02-10 09:04:23.289774 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:04:23.292130 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:04:23.295184 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-02-10 09:04:23.296316 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:04:23.298896 | orchestrator | 2025-02-10 09:04:23.300477 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-02-10 09:04:23.301099 | orchestrator | Monday 10 February 2025 09:04:23 +0000 (0:00:00.369) 0:03:27.680 ******* 2025-02-10 09:04:24.421802 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-02-10 09:04:24.421951 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-02-10 09:04:24.421966 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-02-10 09:04:24.421979 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-02-10 09:04:24.422354 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-02-10 09:04:24.424178 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-02-10 09:04:24.424677 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-02-10 09:04:24.425407 | orchestrator | 2025-02-10 09:04:24.425815 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-02-10 09:04:24.426261 | orchestrator | Monday 10 February 2025 09:04:24 +0000 (0:00:01.131) 0:03:28.811 ******* 2025-02-10 09:04:24.875742 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:04:24.875938 | orchestrator | 2025-02-10 09:04:24.875972 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-02-10 09:04:24.876328 | orchestrator | Monday 10 February 2025 09:04:24 +0000 (0:00:00.455) 0:03:29.267 ******* 2025-02-10 09:04:26.210718 | orchestrator | ok: [testbed-manager] 2025-02-10 09:04:26.212819 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:04:26.213906 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:04:26.214463 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:04:26.215506 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:04:26.216122 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:04:26.217673 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:04:26.217873 | orchestrator | 2025-02-10 09:04:26.218446 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-02-10 09:04:26.219280 | orchestrator | Monday 10 February 2025 09:04:26 +0000 (0:00:01.333) 0:03:30.601 ******* 2025-02-10 09:04:26.831637 | orchestrator | ok: [testbed-manager] 2025-02-10 09:04:26.832237 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:04:26.832296 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:04:26.832325 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:04:26.832703 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:04:26.832861 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:04:26.833979 | orchestrator | ok: 
[testbed-node-2] 2025-02-10 09:04:26.834668 | orchestrator | 2025-02-10 09:04:26.834916 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-02-10 09:04:26.834969 | orchestrator | Monday 10 February 2025 09:04:26 +0000 (0:00:00.620) 0:03:31.222 ******* 2025-02-10 09:04:27.497143 | orchestrator | changed: [testbed-manager] 2025-02-10 09:04:27.497321 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:04:27.497346 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:04:27.498717 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:04:27.499100 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:04:27.499728 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:04:27.502083 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:04:27.502569 | orchestrator | 2025-02-10 09:04:27.503601 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-02-10 09:04:27.504101 | orchestrator | Monday 10 February 2025 09:04:27 +0000 (0:00:00.666) 0:03:31.888 ******* 2025-02-10 09:04:28.128328 | orchestrator | ok: [testbed-manager] 2025-02-10 09:04:28.128745 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:04:28.130256 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:04:28.131717 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:04:28.133518 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:04:28.134508 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:04:28.135438 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:04:28.136313 | orchestrator | 2025-02-10 09:04:28.137305 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-02-10 09:04:28.137935 | orchestrator | Monday 10 February 2025 09:04:28 +0000 (0:00:00.631) 0:03:32.520 ******* 2025-02-10 09:04:29.168460 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739176544.606488, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:04:29.168711 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739177996.622358, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:04:29.168790 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739177996.52779, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
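
The motd tasks around this point disable Ubuntu's dynamic motd-news mechanism and strip the pam_motd.so rules from every file found in /etc/pam.d, so only the static banner installed in the next steps is shown at login. A rough task sketch under those assumptions (not the role's actual task file):

---
# Sketch: turn off dynamic motd-news and remove pam_motd.so rules from PAM configs.
- name: Disable the dynamic motd-news service
  ansible.builtin.lineinfile:
    path: /etc/default/motd-news
    regexp: '^ENABLED='
    line: 'ENABLED=0'

- name: Get all configuration files in /etc/pam.d
  ansible.builtin.find:
    paths: /etc/pam.d
    file_type: file
  register: pam_files

- name: Remove pam_motd.so rule
  ansible.builtin.lineinfile:
    path: "{{ item.path }}"
    regexp: 'pam_motd\.so'
    state: absent
  loop: "{{ pam_files.files }}"
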
False, 'isuid': False, 'isgid': False}) 2025-02-10 09:04:29.169009 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739177996.5702486, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:04:29.170197 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739177996.6129305, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:04:29.170305 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739177996.58061, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:04:29.170793 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739177996.6183012, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:04:29.171551 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739176571.7654803, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:04:29.171787 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739176477.9893582, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}) 2025-02-10 09:04:29.172307 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739176474.615986, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:04:29.172669 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739176470.1681492, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:04:29.173040 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739176465.3390036, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:04:29.173628 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739176474.2099233, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:04:29.174363 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739176470.465304, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:04:29.174561 | orchestrator | 2025-02-10 09:04:29.175179 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-02-10 09:04:29.175624 | orchestrator | Monday 10 February 2025 09:04:29 +0000 (0:00:01.040) 0:03:33.560 ******* 2025-02-10 09:04:30.372971 | orchestrator | changed: [testbed-manager] 2025-02-10 09:04:30.373778 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:04:30.373825 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:04:30.375109 | orchestrator | changed: 
[testbed-node-0] 2025-02-10 09:04:30.376127 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:04:30.376297 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:04:30.376325 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:04:30.376803 | orchestrator | 2025-02-10 09:04:30.377240 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-02-10 09:04:30.377724 | orchestrator | Monday 10 February 2025 09:04:30 +0000 (0:00:01.201) 0:03:34.762 ******* 2025-02-10 09:04:31.677016 | orchestrator | changed: [testbed-manager] 2025-02-10 09:04:31.677533 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:04:31.677618 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:04:31.677641 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:04:31.677685 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:04:31.677818 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:04:31.677835 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:04:31.678433 | orchestrator | 2025-02-10 09:04:31.678763 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-02-10 09:04:31.678897 | orchestrator | Monday 10 February 2025 09:04:31 +0000 (0:00:01.306) 0:03:36.068 ******* 2025-02-10 09:04:32.918983 | orchestrator | changed: [testbed-manager] 2025-02-10 09:04:32.919594 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:04:32.919632 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:04:32.919658 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:04:32.921757 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:04:32.922178 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:04:32.922661 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:04:32.923247 | orchestrator | 2025-02-10 09:04:32.923585 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-02-10 09:04:32.924058 | orchestrator | Monday 10 February 2025 09:04:32 +0000 (0:00:01.235) 0:03:37.304 ******* 2025-02-10 09:04:33.027216 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:04:33.064616 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:04:33.098306 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:04:33.132312 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:04:33.190915 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:04:33.191073 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:04:33.191158 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:04:33.192465 | orchestrator | 2025-02-10 09:04:33.193928 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-02-10 09:04:33.195036 | orchestrator | Monday 10 February 2025 09:04:33 +0000 (0:00:00.279) 0:03:37.584 ******* 2025-02-10 09:04:34.043222 | orchestrator | ok: [testbed-manager] 2025-02-10 09:04:34.043520 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:04:34.043617 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:04:34.043736 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:04:34.044766 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:04:34.045256 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:04:34.045706 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:04:34.046103 | orchestrator | 2025-02-10 09:04:34.047237 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-02-10 09:04:34.047334 | orchestrator | Monday 10 February 2025 09:04:34 
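
After the PAM cleanup, static /etc/motd, /etc/issue and /etc/issue.net files are copied into place and sshd is configured not to print the motd itself (the "print the motd" variant of the task is skipped on every host). A minimal sketch of these two steps; the banner content is a placeholder, not the role's template:

---
# Sketch: install a static banner file and tell sshd not to print the motd itself.
- name: Copy motd file
  ansible.builtin.copy:
    content: "OSISM testbed node (placeholder banner)\n"   # assumed content
    dest: /etc/motd
    owner: root
    group: root
    mode: "0644"

- name: Configure SSH to not print the motd
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?PrintMotd'
    line: 'PrintMotd no'
    validate: '/usr/sbin/sshd -t -f %s'   # refuse to write a broken sshd_config
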
+0000 (0:00:00.850) 0:03:38.434 ******* 2025-02-10 09:04:34.480261 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:04:34.482866 | orchestrator | 2025-02-10 09:04:34.482925 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-02-10 09:04:34.484164 | orchestrator | Monday 10 February 2025 09:04:34 +0000 (0:00:00.438) 0:03:38.872 ******* 2025-02-10 09:04:42.967971 | orchestrator | ok: [testbed-manager] 2025-02-10 09:04:42.968435 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:04:42.968466 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:04:42.968482 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:04:42.971968 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:04:42.973401 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:04:42.975092 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:04:42.975654 | orchestrator | 2025-02-10 09:04:42.976461 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-02-10 09:04:42.976937 | orchestrator | Monday 10 February 2025 09:04:42 +0000 (0:00:08.485) 0:03:47.357 ******* 2025-02-10 09:04:44.331472 | orchestrator | ok: [testbed-manager] 2025-02-10 09:04:44.331789 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:04:44.331823 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:04:44.331863 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:04:44.332124 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:04:44.332810 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:04:44.333554 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:04:44.335186 | orchestrator | 2025-02-10 09:04:44.339173 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-02-10 09:04:44.339242 | orchestrator | Monday 10 February 2025 09:04:44 +0000 (0:00:01.360) 0:03:48.718 ******* 2025-02-10 09:04:45.699211 | orchestrator | ok: [testbed-manager] 2025-02-10 09:04:45.701954 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:04:45.702011 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:04:45.702354 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:04:45.703307 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:04:45.704106 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:04:45.707354 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:04:45.707885 | orchestrator | 2025-02-10 09:04:45.708257 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-02-10 09:04:45.709220 | orchestrator | Monday 10 February 2025 09:04:45 +0000 (0:00:01.370) 0:03:50.088 ******* 2025-02-10 09:04:46.192708 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:04:46.194362 | orchestrator | 2025-02-10 09:04:46.194743 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-02-10 09:04:46.195206 | orchestrator | Monday 10 February 2025 09:04:46 +0000 (0:00:00.494) 0:03:50.583 ******* 2025-02-10 09:04:54.994308 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:04:54.994567 | orchestrator | changed: 
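
The osism.services.rng tasks above install an entropy daemon on the Debian-family hosts, drop the older haveged package, and keep the service enabled; the manager already had the package ("ok") while the nodes did not ("changed"). A hedged equivalent, with package and unit names assumed to be the common Debian/Ubuntu defaults:

---
# Sketch: ensure an rng daemon is present, haveged is gone, and the service runs.
- name: Install rng package
  ansible.builtin.apt:
    name: rng-tools            # exact package name varies between releases
    state: present

- name: Remove haveged package
  ansible.builtin.apt:
    name: haveged
    state: absent

- name: Manage rng service
  ansible.builtin.service:
    name: rngd                 # assumed unit name; may differ per release
    state: started
    enabled: true
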
[testbed-node-5] 2025-02-10 09:04:54.994598 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:04:54.995500 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:04:54.999134 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:04:55.001712 | orchestrator | changed: [testbed-manager] 2025-02-10 09:04:55.001768 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:04:55.001789 | orchestrator | 2025-02-10 09:04:55.002341 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-02-10 09:04:55.002958 | orchestrator | Monday 10 February 2025 09:04:54 +0000 (0:00:08.798) 0:03:59.382 ******* 2025-02-10 09:04:55.611770 | orchestrator | changed: [testbed-manager] 2025-02-10 09:04:55.612012 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:04:55.612859 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:04:55.613693 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:04:55.614873 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:04:55.616066 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:04:55.616193 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:04:55.616853 | orchestrator | 2025-02-10 09:04:55.617519 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-02-10 09:04:55.618146 | orchestrator | Monday 10 February 2025 09:04:55 +0000 (0:00:00.621) 0:04:00.004 ******* 2025-02-10 09:04:56.832968 | orchestrator | changed: [testbed-manager] 2025-02-10 09:04:56.833572 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:04:56.834144 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:04:56.834188 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:04:56.838253 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:04:57.948863 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:04:57.949012 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:04:57.949033 | orchestrator | 2025-02-10 09:04:57.949050 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-02-10 09:04:57.949067 | orchestrator | Monday 10 February 2025 09:04:56 +0000 (0:00:01.220) 0:04:01.224 ******* 2025-02-10 09:04:57.949100 | orchestrator | changed: [testbed-manager] 2025-02-10 09:04:57.949168 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:04:57.951909 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:04:57.952645 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:04:57.952688 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:04:57.952715 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:04:57.952748 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:04:57.953493 | orchestrator | 2025-02-10 09:04:57.953961 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-02-10 09:04:57.954576 | orchestrator | Monday 10 February 2025 09:04:57 +0000 (0:00:01.115) 0:04:02.339 ******* 2025-02-10 09:04:58.070298 | orchestrator | ok: [testbed-manager] 2025-02-10 09:04:58.116501 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:04:58.157441 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:04:58.194911 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:04:58.265482 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:04:58.265755 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:04:58.266545 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:04:58.266860 | orchestrator | 2025-02-10 09:04:58.267323 | orchestrator | TASK 
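
The smartd tasks install smartmontools, create a dedicated log directory, deploy a configuration file and enable the service. A rough sketch; the DEVICESCAN line is a generic placeholder, not the role's template content:

---
# Sketch: install and enable S.M.A.R.T. disk monitoring.
- name: Install smartmontools package
  ansible.builtin.apt:
    name: smartmontools
    state: present

- name: Create /var/log/smartd directory
  ansible.builtin.file:
    path: /var/log/smartd
    state: directory
    owner: root
    group: root
    mode: "0755"

- name: Copy smartmontools configuration file
  ansible.builtin.copy:
    content: "DEVICESCAN -a -o on -S on\n"   # placeholder configuration
    dest: /etc/smartd.conf
    mode: "0644"

- name: Manage smartd service
  ansible.builtin.service:
    name: smartd
    state: started
    enabled: true
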
[osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-02-10 09:04:58.267738 | orchestrator | Monday 10 February 2025 09:04:58 +0000 (0:00:00.317) 0:04:02.657 ******* 2025-02-10 09:04:58.370775 | orchestrator | ok: [testbed-manager] 2025-02-10 09:04:58.412904 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:04:58.448886 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:04:58.495914 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:04:58.597279 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:04:58.597959 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:04:58.598003 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:04:58.598541 | orchestrator | 2025-02-10 09:04:58.599769 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-02-10 09:04:58.600510 | orchestrator | Monday 10 February 2025 09:04:58 +0000 (0:00:00.331) 0:04:02.988 ******* 2025-02-10 09:04:58.709170 | orchestrator | ok: [testbed-manager] 2025-02-10 09:04:58.747955 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:04:58.787556 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:04:58.831191 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:04:58.913317 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:04:58.914686 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:04:58.914790 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:04:58.916513 | orchestrator | 2025-02-10 09:04:58.917145 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-02-10 09:04:58.917890 | orchestrator | Monday 10 February 2025 09:04:58 +0000 (0:00:00.317) 0:04:03.306 ******* 2025-02-10 09:05:04.335523 | orchestrator | ok: [testbed-manager] 2025-02-10 09:05:04.337305 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:05:04.338436 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:05:04.339043 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:05:04.340055 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:05:04.340870 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:05:04.341729 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:05:04.342218 | orchestrator | 2025-02-10 09:05:04.342949 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-02-10 09:05:04.343482 | orchestrator | Monday 10 February 2025 09:05:04 +0000 (0:00:05.419) 0:04:08.726 ******* 2025-02-10 09:05:04.889262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:05:04.892357 | orchestrator | 2025-02-10 09:05:04.892666 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-02-10 09:05:04.893214 | orchestrator | Monday 10 February 2025 09:05:04 +0000 (0:00:00.554) 0:04:09.280 ******* 2025-02-10 09:05:04.965862 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-02-10 09:05:05.023856 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-02-10 09:05:05.023992 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-02-10 09:05:05.024030 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:05:05.024956 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-02-10 09:05:05.025549 | orchestrator | skipping: [testbed-node-4] => 
(item=apt-daily-upgrade)  2025-02-10 09:05:05.064224 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-02-10 09:05:05.064427 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:05:05.121756 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:05:05.121989 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-02-10 09:05:05.122779 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-02-10 09:05:05.123701 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-02-10 09:05:05.124629 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-02-10 09:05:05.164564 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:05:05.253725 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:05:05.254193 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-02-10 09:05:05.255490 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-02-10 09:05:05.255776 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:05:05.256628 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-02-10 09:05:05.257245 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-02-10 09:05:05.258083 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:05:05.258924 | orchestrator | 2025-02-10 09:05:05.259412 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-02-10 09:05:05.260124 | orchestrator | Monday 10 February 2025 09:05:05 +0000 (0:00:00.365) 0:04:09.646 ******* 2025-02-10 09:05:05.862681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:05:05.864726 | orchestrator | 2025-02-10 09:05:05.868309 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-02-10 09:05:05.871270 | orchestrator | Monday 10 February 2025 09:05:05 +0000 (0:00:00.605) 0:04:10.252 ******* 2025-02-10 09:05:05.939078 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-02-10 09:05:05.940152 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-02-10 09:05:05.983021 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:05:05.983180 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-02-10 09:05:06.033629 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:05:06.033966 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-02-10 09:05:06.074339 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:05:06.122680 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:05:06.202181 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-02-10 09:05:06.202300 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-02-10 09:05:06.202338 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:05:06.203194 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:05:06.204909 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-02-10 09:05:06.205299 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:05:06.206768 | orchestrator | 2025-02-10 09:05:06.207575 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 
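
The cleanup role would stop and disable the apt-daily/apt-daily-upgrade timers and units such as ModemManager, but every host reports "skipping", which normally means the guarding condition for those tasks evaluated to false. A hypothetical sketch of the pattern, using an assumed flag as the guard:

---
# Sketch: stop and disable unattended apt timers when the cleanup is enabled.
- name: Disable apt-daily timers
  ansible.builtin.systemd:
    name: "{{ item }}.timer"
    state: stopped
    enabled: false
  loop:
    - apt-daily-upgrade
    - apt-daily
  when: cleanup_apt_timers | default(false)   # assumed flag; the role's real condition may differ
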
2025-02-10 09:05:06.208014 | orchestrator | Monday 10 February 2025 09:05:06 +0000 (0:00:00.343) 0:04:10.595 ******* 2025-02-10 09:05:06.685241 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:05:06.685462 | orchestrator | 2025-02-10 09:05:06.685487 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-02-10 09:05:06.685573 | orchestrator | Monday 10 February 2025 09:05:06 +0000 (0:00:00.482) 0:04:11.078 ******* 2025-02-10 09:05:41.930622 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:05:41.931526 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:05:41.931558 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:05:41.931575 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:05:41.931591 | orchestrator | changed: [testbed-manager] 2025-02-10 09:05:41.931614 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:05:41.935677 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:05:41.935725 | orchestrator | 2025-02-10 09:05:41.935748 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-02-10 09:05:50.092054 | orchestrator | Monday 10 February 2025 09:05:41 +0000 (0:00:35.239) 0:04:46.317 ******* 2025-02-10 09:05:50.092305 | orchestrator | changed: [testbed-manager] 2025-02-10 09:05:50.092464 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:05:50.093653 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:05:50.094631 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:05:50.095514 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:05:50.095967 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:05:50.097216 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:05:50.097592 | orchestrator | 2025-02-10 09:05:50.098560 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-02-10 09:05:50.099086 | orchestrator | Monday 10 February 2025 09:05:50 +0000 (0:00:08.163) 0:04:54.481 ******* 2025-02-10 09:05:58.199944 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:05:58.200176 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:05:58.200201 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:05:58.200216 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:05:58.200236 | orchestrator | changed: [testbed-manager] 2025-02-10 09:05:58.202469 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:05:58.202706 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:05:58.206101 | orchestrator | 2025-02-10 09:05:58.207032 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-02-10 09:05:58.207178 | orchestrator | Monday 10 February 2025 09:05:58 +0000 (0:00:08.107) 0:05:02.589 ******* 2025-02-10 09:05:59.999914 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:00.001890 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:06:00.002781 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:06:00.002854 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:06:00.004062 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:06:00.004822 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:06:00.005730 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:06:00.006599 | orchestrator | 2025-02-10 09:06:00.007609 | 
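
The package cleanup above removes a list of unwanted packages (the 35-second run), uninstalls cloud-init and unattended-upgrades, cleans the apt cache, and then autoremoves unused dependencies in the task that follows below. A hedged sketch with an illustrative package list:

---
# Sketch: remove unwanted packages and tidy up apt state afterwards.
- name: Cleanup installed packages
  ansible.builtin.apt:
    name: "{{ cleanup_packages | default(['snapd']) }}"   # illustrative list, not the role's default
    state: absent
    purge: true

- name: Remove useless packages from the cache
  ansible.builtin.apt:
    autoclean: true

- name: Remove dependencies that are no longer required
  ansible.builtin.apt:
    autoremove: true
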
orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-02-10 09:06:00.008010 | orchestrator | Monday 10 February 2025 09:05:59 +0000 (0:00:01.800) 0:05:04.390 ******* 2025-02-10 09:06:05.939849 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:06:05.941664 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:05.941704 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:05.942209 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:06:05.944053 | orchestrator | changed: [testbed-manager] 2025-02-10 09:06:05.944686 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:05.945551 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:05.946592 | orchestrator | 2025-02-10 09:06:05.947642 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-02-10 09:06:05.949244 | orchestrator | Monday 10 February 2025 09:06:05 +0000 (0:00:05.940) 0:05:10.330 ******* 2025-02-10 09:06:06.394858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:06:06.395806 | orchestrator | 2025-02-10 09:06:06.396985 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-02-10 09:06:06.398068 | orchestrator | Monday 10 February 2025 09:06:06 +0000 (0:00:00.455) 0:05:10.786 ******* 2025-02-10 09:06:07.172213 | orchestrator | changed: [testbed-manager] 2025-02-10 09:06:07.172548 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:07.173576 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:07.174666 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:07.175600 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:06:07.176550 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:07.177350 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:06:07.178135 | orchestrator | 2025-02-10 09:06:07.179445 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-02-10 09:06:07.180269 | orchestrator | Monday 10 February 2025 09:06:07 +0000 (0:00:00.776) 0:05:11.562 ******* 2025-02-10 09:06:08.931059 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:08.932487 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:06:08.934286 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:06:08.935056 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:06:08.935881 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:06:08.937259 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:06:08.938154 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:06:08.939604 | orchestrator | 2025-02-10 09:06:08.940441 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-02-10 09:06:08.941068 | orchestrator | Monday 10 February 2025 09:06:08 +0000 (0:00:01.759) 0:05:13.322 ******* 2025-02-10 09:06:09.778737 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:06:09.778937 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:09.778968 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:09.780731 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:09.781532 | orchestrator | changed: [testbed-manager] 2025-02-10 09:06:09.782096 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:09.782931 | orchestrator | changed: 
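
Once cloud-init has been uninstalled, its configuration directory is removed and the hosts are pinned to UTC. A minimal sketch of both steps (the timezone module lives in the community.general collection):

---
# Sketch: drop leftover cloud-init configuration and force the clock to UTC.
- name: Remove cloud-init configuration directory
  ansible.builtin.file:
    path: /etc/cloud
    state: absent

- name: Set timezone to UTC
  community.general.timezone:
    name: Etc/UTC
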
[testbed-node-2] 2025-02-10 09:06:09.784599 | orchestrator | 2025-02-10 09:06:09.784776 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-02-10 09:06:09.786138 | orchestrator | Monday 10 February 2025 09:06:09 +0000 (0:00:00.846) 0:05:14.168 ******* 2025-02-10 09:06:09.878135 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:06:09.911428 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:06:09.959142 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:06:09.996215 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:06:10.079873 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:06:10.080217 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:06:10.081267 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:06:10.085273 | orchestrator | 2025-02-10 09:06:10.166766 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-02-10 09:06:10.166924 | orchestrator | Monday 10 February 2025 09:06:10 +0000 (0:00:00.303) 0:05:14.471 ******* 2025-02-10 09:06:10.166965 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:06:10.199988 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:06:10.233689 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:06:10.268978 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:06:10.306909 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:06:10.543440 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:06:10.544171 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:06:10.545085 | orchestrator | 2025-02-10 09:06:10.547749 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-02-10 09:06:10.658953 | orchestrator | Monday 10 February 2025 09:06:10 +0000 (0:00:00.464) 0:05:14.936 ******* 2025-02-10 09:06:10.659114 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:10.693894 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:06:10.737637 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:06:10.770671 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:06:10.848941 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:06:10.849080 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:06:10.850305 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:06:10.850761 | orchestrator | 2025-02-10 09:06:10.850783 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-02-10 09:06:10.851508 | orchestrator | Monday 10 February 2025 09:06:10 +0000 (0:00:00.304) 0:05:15.241 ******* 2025-02-10 09:06:10.949099 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:06:11.001596 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:06:11.154126 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:06:11.195140 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:06:11.283177 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:06:11.283543 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:06:11.284315 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:06:11.287609 | orchestrator | 2025-02-10 09:06:11.398541 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-02-10 09:06:11.398680 | orchestrator | Monday 10 February 2025 09:06:11 +0000 (0:00:00.434) 0:05:15.675 ******* 2025-02-10 09:06:11.398718 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:11.432483 | orchestrator | ok: [testbed-node-3] 2025-02-10 
09:06:11.471501 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:06:11.522501 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:06:11.606351 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:06:11.606980 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:06:11.607178 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:06:11.607774 | orchestrator | 2025-02-10 09:06:11.608660 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-02-10 09:06:11.609118 | orchestrator | Monday 10 February 2025 09:06:11 +0000 (0:00:00.323) 0:05:15.998 ******* 2025-02-10 09:06:11.727714 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:06:11.767047 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:06:11.816738 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:06:11.854826 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:06:11.918643 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:06:11.918849 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:06:11.919238 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:06:11.919265 | orchestrator | 2025-02-10 09:06:11.919286 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-02-10 09:06:11.920031 | orchestrator | Monday 10 February 2025 09:06:11 +0000 (0:00:00.311) 0:05:16.310 ******* 2025-02-10 09:06:11.994795 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:06:12.026758 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:06:12.066520 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:06:12.095333 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:06:12.127732 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:06:12.196278 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:06:12.196518 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:06:12.197916 | orchestrator | 2025-02-10 09:06:12.198731 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-02-10 09:06:12.199709 | orchestrator | Monday 10 February 2025 09:06:12 +0000 (0:00:00.279) 0:05:16.589 ******* 2025-02-10 09:06:12.700336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:06:12.701043 | orchestrator | 2025-02-10 09:06:12.701769 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-02-10 09:06:12.706290 | orchestrator | Monday 10 February 2025 09:06:12 +0000 (0:00:00.503) 0:05:17.093 ******* 2025-02-10 09:06:13.745345 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:13.746877 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:06:13.746946 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:06:13.747037 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:06:13.748454 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:06:13.748898 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:06:13.750249 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:06:13.750398 | orchestrator | 2025-02-10 09:06:13.751322 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-02-10 09:06:13.752032 | orchestrator | Monday 10 February 2025 09:06:13 +0000 (0:00:01.042) 0:05:18.135 ******* 2025-02-10 09:06:16.661879 | orchestrator | ok: [testbed-node-1] 
2025-02-10 09:06:16.662210 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:06:16.663862 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:06:16.663968 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:06:16.666690 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:06:16.667481 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:06:16.668349 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:16.668902 | orchestrator | 2025-02-10 09:06:16.669468 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-02-10 09:06:16.669966 | orchestrator | Monday 10 February 2025 09:06:16 +0000 (0:00:02.916) 0:05:21.052 ******* 2025-02-10 09:06:16.738912 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-02-10 09:06:16.821923 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-02-10 09:06:16.823205 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-02-10 09:06:16.826946 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-02-10 09:06:16.828831 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-02-10 09:06:17.056511 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-02-10 09:06:17.056679 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:06:17.056791 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-02-10 09:06:17.057453 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-02-10 09:06:17.057721 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-02-10 09:06:17.130658 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:06:17.130922 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-02-10 09:06:17.130943 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-02-10 09:06:17.222322 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:06:17.223522 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-02-10 09:06:17.223775 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-02-10 09:06:17.224097 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-02-10 09:06:17.224688 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-02-10 09:06:17.314456 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:06:17.314648 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-02-10 09:06:17.314679 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-02-10 09:06:17.314842 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-02-10 09:06:17.453891 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:06:17.454323 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:06:17.455765 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-02-10 09:06:17.460607 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-02-10 09:06:17.460975 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-02-10 09:06:17.461523 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:06:17.461823 | orchestrator | 2025-02-10 09:06:17.462452 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-02-10 09:06:17.462922 | orchestrator | Monday 10 February 2025 09:06:17 +0000 (0:00:00.793) 0:05:21.845 ******* 2025-02-10 09:06:24.140188 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:24.140460 | 
orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:24.142588 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:24.142627 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:06:24.143386 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:06:24.143503 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:24.145847 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:24.146307 | orchestrator | 2025-02-10 09:06:24.146908 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-02-10 09:06:24.149324 | orchestrator | Monday 10 February 2025 09:06:24 +0000 (0:00:06.684) 0:05:28.530 ******* 2025-02-10 09:06:25.257250 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:25.259065 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:25.259115 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:25.259571 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:25.260292 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:06:25.260977 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:25.261531 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:06:25.262453 | orchestrator | 2025-02-10 09:06:25.263069 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-02-10 09:06:25.263439 | orchestrator | Monday 10 February 2025 09:06:25 +0000 (0:00:01.118) 0:05:29.648 ******* 2025-02-10 09:06:33.110905 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:33.113059 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:33.113104 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:33.115113 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:06:33.115143 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:06:33.115158 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:33.115178 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:33.116414 | orchestrator | 2025-02-10 09:06:33.117272 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-02-10 09:06:33.118630 | orchestrator | Monday 10 February 2025 09:06:33 +0000 (0:00:07.851) 0:05:37.499 ******* 2025-02-10 09:06:36.316149 | orchestrator | changed: [testbed-manager] 2025-02-10 09:06:36.317583 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:06:36.317641 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:36.318414 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:06:36.318541 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:36.320764 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:36.322001 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:36.323124 | orchestrator | 2025-02-10 09:06:36.324045 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-02-10 09:06:36.325124 | orchestrator | Monday 10 February 2025 09:06:36 +0000 (0:00:03.205) 0:05:40.704 ******* 2025-02-10 09:06:37.893802 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:37.894606 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:37.894637 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:37.894659 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:37.894971 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:06:37.895884 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:37.897781 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:06:37.898085 | orchestrator | 2025-02-10 09:06:37.898525 
| orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-02-10 09:06:39.270242 | orchestrator | Monday 10 February 2025 09:06:37 +0000 (0:00:01.580) 0:05:42.285 ******* 2025-02-10 09:06:39.270502 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:39.271545 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:39.271583 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:39.271729 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:39.272460 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:06:39.272885 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:06:39.273507 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:39.274522 | orchestrator | 2025-02-10 09:06:39.274592 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-02-10 09:06:39.275017 | orchestrator | Monday 10 February 2025 09:06:39 +0000 (0:00:01.374) 0:05:43.659 ******* 2025-02-10 09:06:39.484326 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:06:39.550955 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:06:39.625748 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:06:39.696518 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:06:39.928890 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:06:39.929141 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:06:39.930142 | orchestrator | changed: [testbed-manager] 2025-02-10 09:06:39.931292 | orchestrator | 2025-02-10 09:06:39.932309 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-02-10 09:06:39.934588 | orchestrator | Monday 10 February 2025 09:06:39 +0000 (0:00:00.660) 0:05:44.319 ******* 2025-02-10 09:06:50.166941 | orchestrator | ok: [testbed-manager] 2025-02-10 09:06:50.167165 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:50.167194 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:06:50.167209 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:50.167282 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:06:50.167299 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:50.167324 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:50.167437 | orchestrator | 2025-02-10 09:06:50.167465 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-02-10 09:06:50.167522 | orchestrator | Monday 10 February 2025 09:06:50 +0000 (0:00:10.238) 0:05:54.558 ******* 2025-02-10 09:06:51.392026 | orchestrator | changed: [testbed-manager] 2025-02-10 09:06:51.392214 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:06:51.392245 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:06:51.394191 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:06:51.394873 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:06:51.395550 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:06:51.395814 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:06:51.396916 | orchestrator | 2025-02-10 09:06:51.397653 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-02-10 09:06:51.401129 | orchestrator | Monday 10 February 2025 09:06:51 +0000 (0:00:01.223) 0:05:55.781 ******* 2025-02-10 09:07:04.273026 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:18.133627 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:18.133776 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:18.133795 | 
orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:18.133809 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:18.133823 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:18.133836 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:18.133877 | orchestrator | 2025-02-10 09:07:18.133892 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-02-10 09:07:18.133907 | orchestrator | Monday 10 February 2025 09:07:04 +0000 (0:00:12.867) 0:06:08.649 ******* 2025-02-10 09:07:18.133938 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:18.136177 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:18.136222 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:18.137116 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:18.138800 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:18.139536 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:18.140275 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:18.141193 | orchestrator | 2025-02-10 09:07:18.141569 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-02-10 09:07:18.142548 | orchestrator | Monday 10 February 2025 09:07:18 +0000 (0:00:13.868) 0:06:22.517 ******* 2025-02-10 09:07:18.528089 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-02-10 09:07:19.495766 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-02-10 09:07:19.495993 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-02-10 09:07:19.497712 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-02-10 09:07:19.497967 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-02-10 09:07:19.502222 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-02-10 09:07:19.502298 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-02-10 09:07:19.502322 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-02-10 09:07:19.504792 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-02-10 09:07:19.505671 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-02-10 09:07:19.507018 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-02-10 09:07:19.507325 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-02-10 09:07:19.508474 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-02-10 09:07:19.508895 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-02-10 09:07:19.509878 | orchestrator | 2025-02-10 09:07:19.510446 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-02-10 09:07:19.511088 | orchestrator | Monday 10 February 2025 09:07:19 +0000 (0:00:01.366) 0:06:23.884 ******* 2025-02-10 09:07:19.642577 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:07:19.712868 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:07:19.792346 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:07:19.864882 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:07:19.929480 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:07:20.050441 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:07:20.051188 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:07:20.052571 | orchestrator | 2025-02-10 09:07:20.053722 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-02-10 09:07:20.054572 | 
orchestrator | Monday 10 February 2025 09:07:20 +0000 (0:00:00.556) 0:06:24.440 ******* 2025-02-10 09:07:24.108644 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:24.111373 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:24.111993 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:24.112040 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:24.112055 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:24.112077 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:24.112946 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:24.114181 | orchestrator | 2025-02-10 09:07:24.114798 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-02-10 09:07:24.115651 | orchestrator | Monday 10 February 2025 09:07:24 +0000 (0:00:04.056) 0:06:28.497 ******* 2025-02-10 09:07:24.243062 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:07:24.315492 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:07:24.381511 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:07:24.447555 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:07:24.527832 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:07:24.647486 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:07:24.648394 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:07:24.648439 | orchestrator | 2025-02-10 09:07:24.649744 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-02-10 09:07:24.650830 | orchestrator | Monday 10 February 2025 09:07:24 +0000 (0:00:00.539) 0:06:29.037 ******* 2025-02-10 09:07:24.718175 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-02-10 09:07:24.719529 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-02-10 09:07:24.795047 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:07:24.796009 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-02-10 09:07:24.796139 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-02-10 09:07:24.871101 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:07:24.871625 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-02-10 09:07:24.872041 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-02-10 09:07:24.972422 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:07:24.973412 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-02-10 09:07:24.974345 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-02-10 09:07:25.057736 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:07:25.058530 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-02-10 09:07:25.058855 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-02-10 09:07:25.147746 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:07:25.148457 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-02-10 09:07:25.150325 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-02-10 09:07:25.274952 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:07:25.276094 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-02-10 09:07:25.277467 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-02-10 09:07:25.278728 | orchestrator | skipping: [testbed-node-2] 
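
[Editor's note] The "Pin docker package version" / "Pin docker-cli package version" tasks seen earlier are typically implemented with an apt preferences entry so later cache updates cannot pull in an untested release. The sketch below is illustrative only; the package version string and file name are placeholders, not values taken from this job:

    - name: Pin docker-ce to a specific version      # version pattern below is a placeholder
      ansible.builtin.copy:
        dest: /etc/apt/preferences.d/docker-ce
        owner: root
        group: root
        mode: "0644"
        content: |
          Package: docker-ce
          Pin: version 5:27.*
          Pin-Priority: 1000
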
2025-02-10 09:07:25.280257 | orchestrator | 2025-02-10 09:07:25.281725 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-02-10 09:07:25.284245 | orchestrator | Monday 10 February 2025 09:07:25 +0000 (0:00:00.628) 0:06:29.666 ******* 2025-02-10 09:07:25.413850 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:07:25.479088 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:07:25.551718 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:07:25.616838 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:07:25.682695 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:07:25.803975 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:07:25.804947 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:07:25.806174 | orchestrator | 2025-02-10 09:07:25.807146 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-02-10 09:07:25.807654 | orchestrator | Monday 10 February 2025 09:07:25 +0000 (0:00:00.530) 0:06:30.196 ******* 2025-02-10 09:07:25.937507 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:07:26.009585 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:07:26.076735 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:07:26.153658 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:07:26.218534 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:07:26.320124 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:07:26.321279 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:07:26.323435 | orchestrator | 2025-02-10 09:07:26.324495 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-02-10 09:07:26.324540 | orchestrator | Monday 10 February 2025 09:07:26 +0000 (0:00:00.515) 0:06:30.711 ******* 2025-02-10 09:07:26.482064 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:07:26.728619 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:07:26.798498 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:07:26.887058 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:07:26.997984 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:07:27.116891 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:07:27.117407 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:07:27.118592 | orchestrator | 2025-02-10 09:07:27.119342 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-02-10 09:07:27.119618 | orchestrator | Monday 10 February 2025 09:07:27 +0000 (0:00:00.796) 0:06:31.507 ******* 2025-02-10 09:07:34.002307 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:34.002466 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:34.002478 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:34.002487 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:34.002822 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:34.004789 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:34.006283 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:34.006523 | orchestrator | 2025-02-10 09:07:34.006535 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-02-10 09:07:34.006545 | orchestrator | Monday 10 February 2025 09:07:33 +0000 (0:00:06.883) 0:06:38.391 ******* 2025-02-10 09:07:35.022535 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:07:35.022738 | orchestrator | 2025-02-10 09:07:35.022769 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-02-10 09:07:35.023672 | orchestrator | Monday 10 February 2025 09:07:35 +0000 (0:00:01.023) 0:06:39.414 ******* 2025-02-10 09:07:36.103824 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:36.104008 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:36.104875 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:36.105054 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:36.105321 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:36.106957 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:36.107599 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:36.109209 | orchestrator | 2025-02-10 09:07:36.109463 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-02-10 09:07:36.110502 | orchestrator | Monday 10 February 2025 09:07:36 +0000 (0:00:01.080) 0:06:40.494 ******* 2025-02-10 09:07:36.965213 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:36.967187 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:36.967241 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:36.972750 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:36.973480 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:36.974433 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:36.975385 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:36.977472 | orchestrator | 2025-02-10 09:07:38.417458 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-02-10 09:07:38.417613 | orchestrator | Monday 10 February 2025 09:07:36 +0000 (0:00:00.861) 0:06:41.355 ******* 2025-02-10 09:07:38.417687 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:38.417784 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:38.417805 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:38.417826 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:38.420450 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:38.420523 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:38.420828 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:38.421757 | orchestrator | 2025-02-10 09:07:38.422068 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-02-10 09:07:38.422798 | orchestrator | Monday 10 February 2025 09:07:38 +0000 (0:00:01.451) 0:06:42.807 ******* 2025-02-10 09:07:38.579241 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:07:39.920719 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:07:39.921019 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:07:39.921046 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:07:39.921061 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:07:39.921817 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:07:39.922195 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:07:39.922972 | orchestrator | 2025-02-10 09:07:39.923647 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-02-10 09:07:39.923669 | orchestrator | Monday 10 February 2025 09:07:39 +0000 (0:00:01.504) 
0:06:44.311 ******* 2025-02-10 09:07:41.283905 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:41.284568 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:41.284681 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:41.284714 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:41.288113 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:41.288277 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:41.288571 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:41.289287 | orchestrator | 2025-02-10 09:07:41.290094 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-02-10 09:07:41.290930 | orchestrator | Monday 10 February 2025 09:07:41 +0000 (0:00:01.361) 0:06:45.673 ******* 2025-02-10 09:07:43.028584 | orchestrator | changed: [testbed-manager] 2025-02-10 09:07:43.030407 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:43.031449 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:43.032430 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:43.032474 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:43.035653 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:43.036037 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:43.036540 | orchestrator | 2025-02-10 09:07:43.037157 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-02-10 09:07:43.037563 | orchestrator | Monday 10 February 2025 09:07:43 +0000 (0:00:01.745) 0:06:47.418 ******* 2025-02-10 09:07:43.962758 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:07:43.962967 | orchestrator | 2025-02-10 09:07:43.962997 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-02-10 09:07:43.963022 | orchestrator | Monday 10 February 2025 09:07:43 +0000 (0:00:00.937) 0:06:48.356 ******* 2025-02-10 09:07:45.218458 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:45.219033 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:07:45.219450 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:07:45.220015 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:07:45.220503 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:07:45.221641 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:07:45.223027 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:07:45.223606 | orchestrator | 2025-02-10 09:07:45.224598 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-02-10 09:07:45.224778 | orchestrator | Monday 10 February 2025 09:07:45 +0000 (0:00:01.254) 0:06:49.610 ******* 2025-02-10 09:07:46.254571 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:46.254747 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:07:46.255246 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:07:46.256671 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:07:46.257385 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:07:46.257434 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:07:46.258706 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:07:46.259611 | orchestrator | 2025-02-10 09:07:46.259975 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-02-10 09:07:46.260020 | orchestrator | Monday 10 February 
2025 09:07:46 +0000 (0:00:01.036) 0:06:50.646 ******* 2025-02-10 09:07:47.432711 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:47.433483 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:07:47.433545 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:07:47.433584 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:07:47.433712 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:07:47.433811 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:07:47.434345 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:07:47.434624 | orchestrator | 2025-02-10 09:07:47.435320 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-02-10 09:07:47.435596 | orchestrator | Monday 10 February 2025 09:07:47 +0000 (0:00:01.176) 0:06:51.823 ******* 2025-02-10 09:07:48.500212 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:48.500923 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:07:48.502274 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:07:48.502329 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:07:48.502396 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:07:48.502413 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:07:48.502945 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:07:48.503029 | orchestrator | 2025-02-10 09:07:48.504123 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-02-10 09:07:48.504342 | orchestrator | Monday 10 February 2025 09:07:48 +0000 (0:00:01.069) 0:06:52.892 ******* 2025-02-10 09:07:49.893069 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:07:49.895116 | orchestrator | 2025-02-10 09:07:49.895498 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-10 09:07:49.895528 | orchestrator | Monday 10 February 2025 09:07:49 +0000 (0:00:00.912) 0:06:53.805 ******* 2025-02-10 09:07:49.895543 | orchestrator | 2025-02-10 09:07:49.895565 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-10 09:07:49.896749 | orchestrator | Monday 10 February 2025 09:07:49 +0000 (0:00:00.039) 0:06:53.845 ******* 2025-02-10 09:07:49.897520 | orchestrator | 2025-02-10 09:07:49.898073 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-10 09:07:49.899298 | orchestrator | Monday 10 February 2025 09:07:49 +0000 (0:00:00.048) 0:06:53.893 ******* 2025-02-10 09:07:49.899891 | orchestrator | 2025-02-10 09:07:49.900877 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-10 09:07:49.901657 | orchestrator | Monday 10 February 2025 09:07:49 +0000 (0:00:00.039) 0:06:53.932 ******* 2025-02-10 09:07:49.901904 | orchestrator | 2025-02-10 09:07:49.902730 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-10 09:07:49.903467 | orchestrator | Monday 10 February 2025 09:07:49 +0000 (0:00:00.039) 0:06:53.972 ******* 2025-02-10 09:07:49.903958 | orchestrator | 2025-02-10 09:07:49.904755 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-10 09:07:49.905408 | orchestrator | Monday 10 February 2025 09:07:49 +0000 (0:00:00.046) 0:06:54.019 ******* 2025-02-10 09:07:49.906163 | 
orchestrator | 2025-02-10 09:07:49.906904 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-10 09:07:49.907246 | orchestrator | Monday 10 February 2025 09:07:49 +0000 (0:00:00.219) 0:06:54.239 ******* 2025-02-10 09:07:49.907597 | orchestrator | 2025-02-10 09:07:49.908538 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-02-10 09:07:49.908926 | orchestrator | Monday 10 February 2025 09:07:49 +0000 (0:00:00.042) 0:06:54.281 ******* 2025-02-10 09:07:51.289103 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:07:51.289818 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:07:51.290874 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:07:51.291544 | orchestrator | 2025-02-10 09:07:51.292779 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-02-10 09:07:51.293071 | orchestrator | Monday 10 February 2025 09:07:51 +0000 (0:00:01.396) 0:06:55.677 ******* 2025-02-10 09:07:53.685662 | orchestrator | changed: [testbed-manager] 2025-02-10 09:07:53.685828 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:53.685852 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:53.686478 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:53.687485 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:53.688141 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:53.689759 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:53.690101 | orchestrator | 2025-02-10 09:07:53.690779 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-02-10 09:07:53.691976 | orchestrator | Monday 10 February 2025 09:07:53 +0000 (0:00:02.396) 0:06:58.074 ******* 2025-02-10 09:07:54.826240 | orchestrator | changed: [testbed-manager] 2025-02-10 09:07:54.826596 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:54.827713 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:54.827957 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:54.829405 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:54.829602 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:54.829645 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:54.829725 | orchestrator | 2025-02-10 09:07:54.830417 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-02-10 09:07:54.831040 | orchestrator | Monday 10 February 2025 09:07:54 +0000 (0:00:01.142) 0:06:59.216 ******* 2025-02-10 09:07:54.971835 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:07:57.413656 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:57.416105 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:57.416781 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:57.417389 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:57.419928 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:57.421120 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:57.421996 | orchestrator | 2025-02-10 09:07:57.423040 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-02-10 09:07:57.423630 | orchestrator | Monday 10 February 2025 09:07:57 +0000 (0:00:02.584) 0:07:01.801 ******* 2025-02-10 09:07:57.523331 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:07:57.523518 | orchestrator | 2025-02-10 09:07:57.524627 | orchestrator | TASK [osism.services.docker : Add user 
to docker group] ************************ 2025-02-10 09:07:57.525125 | orchestrator | Monday 10 February 2025 09:07:57 +0000 (0:00:00.114) 0:07:01.916 ******* 2025-02-10 09:07:58.799727 | orchestrator | ok: [testbed-manager] 2025-02-10 09:07:58.799882 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:07:58.799902 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:07:58.802907 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:07:58.803148 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:07:58.803169 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:07:58.803181 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:07:58.805093 | orchestrator | 2025-02-10 09:07:58.805154 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-02-10 09:07:58.805518 | orchestrator | Monday 10 February 2025 09:07:58 +0000 (0:00:01.272) 0:07:03.188 ******* 2025-02-10 09:07:58.955475 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:07:59.023153 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:07:59.104712 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:07:59.174266 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:07:59.240693 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:07:59.401436 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:07:59.401631 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:07:59.402939 | orchestrator | 2025-02-10 09:07:59.403194 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-02-10 09:07:59.404235 | orchestrator | Monday 10 February 2025 09:07:59 +0000 (0:00:00.604) 0:07:03.793 ******* 2025-02-10 09:08:00.333325 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:08:00.333674 | orchestrator | 2025-02-10 09:08:00.334609 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-02-10 09:08:00.337862 | orchestrator | Monday 10 February 2025 09:08:00 +0000 (0:00:00.929) 0:07:04.723 ******* 2025-02-10 09:08:01.225959 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:01.226553 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:08:01.227507 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:08:01.228920 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:08:01.230576 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:08:01.231480 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:08:01.232804 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:08:01.234559 | orchestrator | 2025-02-10 09:08:01.235607 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-02-10 09:08:01.236566 | orchestrator | Monday 10 February 2025 09:08:01 +0000 (0:00:00.893) 0:07:05.617 ******* 2025-02-10 09:08:04.088663 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-02-10 09:08:04.089400 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-02-10 09:08:04.091194 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-02-10 09:08:04.092709 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-02-10 09:08:04.093701 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-02-10 09:08:04.095924 | orchestrator | changed: 
[testbed-node-0] => (item=docker_containers) 2025-02-10 09:08:04.096234 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-02-10 09:08:04.097261 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-02-10 09:08:04.097993 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-02-10 09:08:04.098761 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-02-10 09:08:04.099567 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-02-10 09:08:04.100805 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-02-10 09:08:04.101545 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-02-10 09:08:04.102612 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-02-10 09:08:04.102965 | orchestrator | 2025-02-10 09:08:04.103589 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-02-10 09:08:04.103962 | orchestrator | Monday 10 February 2025 09:08:04 +0000 (0:00:02.859) 0:07:08.477 ******* 2025-02-10 09:08:04.232165 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:08:04.296987 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:08:04.368718 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:08:04.433799 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:08:04.497740 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:08:04.620533 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:08:04.620825 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:08:04.624521 | orchestrator | 2025-02-10 09:08:05.523952 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-02-10 09:08:05.524096 | orchestrator | Monday 10 February 2025 09:08:04 +0000 (0:00:00.534) 0:07:09.012 ******* 2025-02-10 09:08:05.524141 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:08:05.524485 | orchestrator | 2025-02-10 09:08:05.524706 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-02-10 09:08:05.525442 | orchestrator | Monday 10 February 2025 09:08:05 +0000 (0:00:00.900) 0:07:09.912 ******* 2025-02-10 09:08:05.972597 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:06.775893 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:08:06.776519 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:08:06.776569 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:08:06.777400 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:08:06.778151 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:08:06.780269 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:08:06.781326 | orchestrator | 2025-02-10 09:08:06.781398 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-02-10 09:08:06.781425 | orchestrator | Monday 10 February 2025 09:08:06 +0000 (0:00:01.253) 0:07:11.166 ******* 2025-02-10 09:08:07.176774 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:07.710193 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:08:07.710993 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:08:07.711715 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:08:07.711889 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:08:07.711919 | 
orchestrator | ok: [testbed-node-1] 2025-02-10 09:08:07.712774 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:08:07.713039 | orchestrator | 2025-02-10 09:08:07.714408 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-02-10 09:08:07.714442 | orchestrator | Monday 10 February 2025 09:08:07 +0000 (0:00:00.933) 0:07:12.100 ******* 2025-02-10 09:08:07.868166 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:08:07.941151 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:08:08.016825 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:08:08.097311 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:08:08.177327 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:08:08.279897 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:08:08.280597 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:08:08.280923 | orchestrator | 2025-02-10 09:08:08.281472 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-02-10 09:08:08.282065 | orchestrator | Monday 10 February 2025 09:08:08 +0000 (0:00:00.570) 0:07:12.670 ******* 2025-02-10 09:08:09.824802 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:09.824992 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:08:09.825859 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:08:09.826115 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:08:09.827095 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:08:09.828121 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:08:09.828948 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:08:09.829344 | orchestrator | 2025-02-10 09:08:09.830127 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-02-10 09:08:09.830719 | orchestrator | Monday 10 February 2025 09:08:09 +0000 (0:00:01.545) 0:07:14.216 ******* 2025-02-10 09:08:09.972504 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:08:10.054710 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:08:10.131550 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:08:10.218190 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:08:10.640795 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:08:10.764812 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:08:10.765308 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:08:10.766373 | orchestrator | 2025-02-10 09:08:10.769413 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-02-10 09:08:12.996673 | orchestrator | Monday 10 February 2025 09:08:10 +0000 (0:00:00.936) 0:07:15.152 ******* 2025-02-10 09:08:12.996923 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:12.998097 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:08:12.998146 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:08:12.998275 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:08:12.999102 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:08:12.999535 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:08:13.000316 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:08:13.002224 | orchestrator | 2025-02-10 09:08:13.003654 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-02-10 09:08:13.004264 | orchestrator | Monday 10 February 2025 09:08:12 +0000 (0:00:02.232) 0:07:17.384 ******* 2025-02-10 09:08:14.391231 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:14.391495 | orchestrator | 
changed: [testbed-node-3] 2025-02-10 09:08:14.391801 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:08:14.392908 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:08:14.394130 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:08:14.394661 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:08:14.396865 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:08:14.398152 | orchestrator | 2025-02-10 09:08:14.398870 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-02-10 09:08:14.399568 | orchestrator | Monday 10 February 2025 09:08:14 +0000 (0:00:01.396) 0:07:18.781 ******* 2025-02-10 09:08:16.311264 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:16.311891 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:08:16.311935 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:08:16.312556 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:08:16.312987 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:08:16.315261 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:08:16.315620 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:08:16.316550 | orchestrator | 2025-02-10 09:08:16.316940 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-02-10 09:08:16.317953 | orchestrator | Monday 10 February 2025 09:08:16 +0000 (0:00:01.919) 0:07:20.700 ******* 2025-02-10 09:08:18.402627 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:18.403066 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:08:18.403098 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:08:18.403110 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:08:18.403127 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:08:18.403270 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:08:18.403287 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:08:18.403298 | orchestrator | 2025-02-10 09:08:18.403313 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-02-10 09:08:18.403820 | orchestrator | Monday 10 February 2025 09:08:18 +0000 (0:00:02.090) 0:07:22.790 ******* 2025-02-10 09:08:18.821923 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:19.275938 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:08:19.277219 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:08:19.277677 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:08:19.278701 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:08:19.280835 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:08:19.281229 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:08:19.281984 | orchestrator | 2025-02-10 09:08:19.282636 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-02-10 09:08:19.283137 | orchestrator | Monday 10 February 2025 09:08:19 +0000 (0:00:00.875) 0:07:23.666 ******* 2025-02-10 09:08:19.420041 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:08:19.485799 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:08:19.552655 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:08:19.629036 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:08:19.704669 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:08:20.120109 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:08:20.120541 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:08:20.121294 | orchestrator | 2025-02-10 09:08:20.122071 | orchestrator | TASK 
[osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-02-10 09:08:20.126175 | orchestrator | Monday 10 February 2025 09:08:20 +0000 (0:00:00.844) 0:07:24.511 ******* 2025-02-10 09:08:20.248530 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:08:20.326325 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:08:20.393060 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:08:20.463973 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:08:20.536622 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:08:20.650764 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:08:20.650964 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:08:20.651925 | orchestrator | 2025-02-10 09:08:20.652007 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-02-10 09:08:20.653165 | orchestrator | Monday 10 February 2025 09:08:20 +0000 (0:00:00.528) 0:07:25.039 ******* 2025-02-10 09:08:21.007014 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:21.078871 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:08:21.147003 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:08:21.221832 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:08:21.290178 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:08:21.391095 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:08:21.392269 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:08:21.396576 | orchestrator | 2025-02-10 09:08:21.399873 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-02-10 09:08:21.545033 | orchestrator | Monday 10 February 2025 09:08:21 +0000 (0:00:00.741) 0:07:25.781 ******* 2025-02-10 09:08:21.545195 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:21.607696 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:08:21.678433 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:08:21.755541 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:08:21.822683 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:08:21.954395 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:08:21.957053 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:08:21.959570 | orchestrator | 2025-02-10 09:08:21.960156 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-02-10 09:08:21.962110 | orchestrator | Monday 10 February 2025 09:08:21 +0000 (0:00:00.565) 0:07:26.347 ******* 2025-02-10 09:08:22.118817 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:22.190276 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:08:22.262790 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:08:22.340969 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:08:22.420106 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:08:22.530720 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:08:22.532751 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:08:22.535374 | orchestrator | 2025-02-10 09:08:22.536426 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-02-10 09:08:22.536818 | orchestrator | Monday 10 February 2025 09:08:22 +0000 (0:00:00.573) 0:07:26.920 ******* 2025-02-10 09:08:27.629759 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:27.630756 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:08:27.630797 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:08:27.631568 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:08:27.632318 | orchestrator | ok: [testbed-node-2] 2025-02-10 
09:08:27.633120 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:08:27.633314 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:08:27.634684 | orchestrator | 2025-02-10 09:08:27.637081 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-02-10 09:08:27.775275 | orchestrator | Monday 10 February 2025 09:08:27 +0000 (0:00:05.099) 0:07:32.020 ******* 2025-02-10 09:08:27.775526 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:08:27.838470 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:08:28.120671 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:08:28.190222 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:08:28.263383 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:08:28.395007 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:08:28.395316 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:08:28.396735 | orchestrator | 2025-02-10 09:08:28.397641 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-02-10 09:08:28.399487 | orchestrator | Monday 10 February 2025 09:08:28 +0000 (0:00:00.765) 0:07:32.785 ******* 2025-02-10 09:08:29.256877 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:08:29.257087 | orchestrator | 2025-02-10 09:08:29.257690 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-02-10 09:08:29.258242 | orchestrator | Monday 10 February 2025 09:08:29 +0000 (0:00:00.860) 0:07:33.646 ******* 2025-02-10 09:08:31.057140 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:31.058007 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:08:31.058181 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:08:31.058641 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:08:31.059238 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:08:31.060717 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:08:31.061243 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:08:31.061819 | orchestrator | 2025-02-10 09:08:31.062222 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-02-10 09:08:31.062781 | orchestrator | Monday 10 February 2025 09:08:31 +0000 (0:00:01.801) 0:07:35.447 ******* 2025-02-10 09:08:32.234110 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:32.234314 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:08:32.234403 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:08:32.234467 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:08:32.235052 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:08:32.235758 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:08:32.236735 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:08:32.237127 | orchestrator | 2025-02-10 09:08:32.237156 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-02-10 09:08:32.237180 | orchestrator | Monday 10 February 2025 09:08:32 +0000 (0:00:01.174) 0:07:36.622 ******* 2025-02-10 09:08:32.764665 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:32.848088 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:08:33.407010 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:08:33.407638 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:08:33.408665 | orchestrator | ok: 
[testbed-node-0] 2025-02-10 09:08:33.409084 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:08:33.410895 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:08:33.411981 | orchestrator | 2025-02-10 09:08:33.412012 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-02-10 09:08:33.412570 | orchestrator | Monday 10 February 2025 09:08:33 +0000 (0:00:01.174) 0:07:37.796 ******* 2025-02-10 09:08:35.418404 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-10 09:08:35.424526 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-10 09:08:35.424668 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-10 09:08:35.426189 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-10 09:08:35.427150 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-10 09:08:35.428093 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-10 09:08:35.428417 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-10 09:08:35.429720 | orchestrator | 2025-02-10 09:08:35.434098 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-02-10 09:08:35.434449 | orchestrator | Monday 10 February 2025 09:08:35 +0000 (0:00:02.012) 0:07:39.809 ******* 2025-02-10 09:08:36.496190 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:08:36.496519 | orchestrator | 2025-02-10 09:08:36.497127 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-02-10 09:08:36.501504 | orchestrator | Monday 10 February 2025 09:08:36 +0000 (0:00:01.077) 0:07:40.887 ******* 2025-02-10 09:08:46.190466 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:08:46.190939 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:08:46.190994 | orchestrator | changed: [testbed-manager] 2025-02-10 09:08:46.192090 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:08:46.193791 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:08:46.195377 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:08:46.196319 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:08:46.197758 | orchestrator | 2025-02-10 09:08:46.198457 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-02-10 09:08:46.199118 | orchestrator | Monday 10 February 2025 09:08:46 +0000 (0:00:09.691) 0:07:50.579 ******* 2025-02-10 09:08:48.095158 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:48.097377 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:08:48.097426 | orchestrator | ok: [testbed-node-4] 
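
[Editor's note] The chrony "Copy configuration file" step above renders chrony.conf.j2 onto each host. A generic sketch of such a task follows; the destination path is the Debian/Ubuntu default and the notify target mirrors the handler name visible later in this log, but neither is copied from the actual role:

    - name: Copy chrony configuration file
      ansible.builtin.template:
        src: chrony.conf.j2                          # assumed template name
        dest: /etc/chrony/chrony.conf
        owner: root
        group: root
        mode: "0644"
      notify: Restart chrony service
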
2025-02-10 09:08:48.097450 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:08:48.097545 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:08:48.097565 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:08:48.097579 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:08:48.097611 | orchestrator | 2025-02-10 09:08:48.097633 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-02-10 09:08:49.681653 | orchestrator | Monday 10 February 2025 09:08:48 +0000 (0:00:01.904) 0:07:52.484 ******* 2025-02-10 09:08:49.681824 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:08:49.681913 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:08:49.681939 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:08:49.683211 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:08:49.684613 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:08:49.684726 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:08:49.685253 | orchestrator | 2025-02-10 09:08:49.689534 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-02-10 09:08:49.692164 | orchestrator | Monday 10 February 2025 09:08:49 +0000 (0:00:01.586) 0:07:54.070 ******* 2025-02-10 09:08:51.044505 | orchestrator | changed: [testbed-manager] 2025-02-10 09:08:51.045425 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:08:51.046007 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:08:51.046940 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:08:51.047633 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:08:51.049032 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:08:51.050563 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:08:51.051171 | orchestrator | 2025-02-10 09:08:51.051910 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-02-10 09:08:51.052586 | orchestrator | 2025-02-10 09:08:51.053758 | orchestrator | TASK [Include hardening role] ************************************************** 2025-02-10 09:08:51.268818 | orchestrator | Monday 10 February 2025 09:08:51 +0000 (0:00:01.365) 0:07:55.435 ******* 2025-02-10 09:08:51.268975 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:08:51.334448 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:08:51.401107 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:08:51.486849 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:08:51.555508 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:08:51.695577 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:08:51.696012 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:08:51.697240 | orchestrator | 2025-02-10 09:08:51.698228 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-02-10 09:08:51.699828 | orchestrator | 2025-02-10 09:08:51.700962 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-02-10 09:08:51.701960 | orchestrator | Monday 10 February 2025 09:08:51 +0000 (0:00:00.649) 0:07:56.085 ******* 2025-02-10 09:08:53.191109 | orchestrator | changed: [testbed-manager] 2025-02-10 09:08:53.191628 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:08:53.193449 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:08:53.195114 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:08:53.196952 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:08:53.197762 | orchestrator | changed: [testbed-node-1] 2025-02-10 
09:08:53.198700 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:08:53.200398 | orchestrator | 2025-02-10 09:08:53.200933 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-02-10 09:08:53.201633 | orchestrator | Monday 10 February 2025 09:08:53 +0000 (0:00:01.495) 0:07:57.580 ******* 2025-02-10 09:08:54.986801 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:54.987727 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:08:54.988424 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:08:54.988487 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:08:54.989009 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:08:54.990211 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:08:54.991178 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:08:54.991876 | orchestrator | 2025-02-10 09:08:54.991919 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-02-10 09:08:54.996013 | orchestrator | Monday 10 February 2025 09:08:54 +0000 (0:00:01.795) 0:07:59.376 ******* 2025-02-10 09:08:55.131048 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:08:55.196680 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:08:55.285523 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:08:55.353441 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:08:55.421429 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:08:55.839571 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:08:55.840561 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:08:55.840601 | orchestrator | 2025-02-10 09:08:55.841290 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-02-10 09:08:55.845791 | orchestrator | Monday 10 February 2025 09:08:55 +0000 (0:00:00.855) 0:08:00.231 ******* 2025-02-10 09:08:57.171243 | orchestrator | changed: [testbed-manager] 2025-02-10 09:08:57.172085 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:08:57.172132 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:08:57.173166 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:08:57.175139 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:08:57.178671 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:08:57.183181 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:08:57.183912 | orchestrator | 2025-02-10 09:08:57.184738 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-02-10 09:08:57.185660 | orchestrator | 2025-02-10 09:08:57.186090 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-02-10 09:08:57.186573 | orchestrator | Monday 10 February 2025 09:08:57 +0000 (0:00:01.330) 0:08:01.562 ******* 2025-02-10 09:08:58.199820 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:08:58.200120 | orchestrator | 2025-02-10 09:08:58.200641 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-02-10 09:08:58.200878 | orchestrator | Monday 10 February 2025 09:08:58 +0000 (0:00:01.026) 0:08:02.589 ******* 2025-02-10 09:08:59.068431 | orchestrator | ok: [testbed-manager] 2025-02-10 09:08:59.070180 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:08:59.070248 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:08:59.070302 | orchestrator | ok: [testbed-node-5] 
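Editor's note: the osism.commons.state tasks running at this point ("Create custom facts directory" above and "Write state into file" just below) use Ansible's local-facts mechanism: a small file dropped under /etc/ansible/facts.d shows up as ansible_local.* on the next fact gathering. A minimal sketch of that mechanism; the file name and payload here are chosen for illustration only:

    # Illustrative local-facts sketch; file name and content are assumptions.
    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"

    - name: Write state into file
      ansible.builtin.copy:
        dest: /etc/ansible/facts.d/example_state.fact
        content: |
          [bootstrap]
          status = True
        mode: "0644"

    # After the next fact gathering the value is readable as
    # ansible_local.example_state.bootstrap.status on each host.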
2025-02-10 09:08:59.071146 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:08:59.072242 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:08:59.073023 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:08:59.073554 | orchestrator | 2025-02-10 09:08:59.074722 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-02-10 09:08:59.075530 | orchestrator | Monday 10 February 2025 09:08:59 +0000 (0:00:00.870) 0:08:03.459 ******* 2025-02-10 09:09:00.238751 | orchestrator | changed: [testbed-manager] 2025-02-10 09:09:00.239601 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:09:00.239669 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:09:00.240509 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:09:00.241082 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:09:00.241868 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:09:00.242764 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:09:00.243977 | orchestrator | 2025-02-10 09:09:00.245170 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-02-10 09:09:00.245817 | orchestrator | Monday 10 February 2025 09:09:00 +0000 (0:00:01.169) 0:08:04.628 ******* 2025-02-10 09:09:01.307947 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:09:01.308193 | orchestrator | 2025-02-10 09:09:01.308225 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-02-10 09:09:01.308908 | orchestrator | Monday 10 February 2025 09:09:01 +0000 (0:00:01.069) 0:08:05.698 ******* 2025-02-10 09:09:01.744459 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:02.214388 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:02.215076 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:02.216244 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:02.217081 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:02.218375 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:02.218998 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:02.219951 | orchestrator | 2025-02-10 09:09:02.220779 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-02-10 09:09:02.221513 | orchestrator | Monday 10 February 2025 09:09:02 +0000 (0:00:00.905) 0:08:06.603 ******* 2025-02-10 09:09:02.668488 | orchestrator | changed: [testbed-manager] 2025-02-10 09:09:03.421206 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:09:03.421716 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:09:03.422315 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:09:03.422374 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:09:03.422689 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:09:03.423120 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:09:03.423150 | orchestrator | 2025-02-10 09:09:03.424723 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:09:03.424774 | orchestrator | 2025-02-10 09:09:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:09:03.425204 | orchestrator | 2025-02-10 09:09:03 | INFO  | Please wait and do not abort execution. 
2025-02-10 09:09:03.425338 | orchestrator | testbed-manager : ok=161  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-02-10 09:09:03.425962 | orchestrator | testbed-node-0 : ok=169  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-10 09:09:03.426543 | orchestrator | testbed-node-1 : ok=169  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-10 09:09:03.427124 | orchestrator | testbed-node-2 : ok=169  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-10 09:09:03.427588 | orchestrator | testbed-node-3 : ok=168  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-02-10 09:09:03.427892 | orchestrator | testbed-node-4 : ok=168  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-10 09:09:03.429792 | orchestrator | testbed-node-5 : ok=168  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-10 09:09:03.430502 | orchestrator | 2025-02-10 09:09:03.430964 | orchestrator | 2025-02-10 09:09:03.431502 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:09:03.432585 | orchestrator | Monday 10 February 2025 09:09:03 +0000 (0:00:01.207) 0:08:07.811 ******* 2025-02-10 09:09:03.433121 | orchestrator | =============================================================================== 2025-02-10 09:09:03.433729 | orchestrator | osism.commons.packages : Install required packages --------------------- 75.62s 2025-02-10 09:09:03.434470 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.60s 2025-02-10 09:09:03.434815 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.24s 2025-02-10 09:09:03.435035 | orchestrator | osism.services.docker : Install docker package ------------------------- 13.87s 2025-02-10 09:09:03.435498 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.35s 2025-02-10 09:09:03.435684 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 12.87s 2025-02-10 09:09:03.436093 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.54s 2025-02-10 09:09:03.436547 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.24s 2025-02-10 09:09:03.436917 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.69s 2025-02-10 09:09:03.437245 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.80s 2025-02-10 09:09:03.437444 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.49s 2025-02-10 09:09:03.437868 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.16s 2025-02-10 09:09:03.438553 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.11s 2025-02-10 09:09:03.438898 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.85s 2025-02-10 09:09:03.439104 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.95s 2025-02-10 09:09:03.439624 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 6.88s 2025-02-10 09:09:03.440054 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.68s 2025-02-10 09:09:03.440164 | 
orchestrator | osism.commons.packages : Remove dependencies that are no longer required --- 6.31s 2025-02-10 09:09:03.440704 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.94s 2025-02-10 09:09:03.441185 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.42s 2025-02-10 09:09:04.163313 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-02-10 09:09:05.717287 | orchestrator | + osism apply network 2025-02-10 09:09:05.717532 | orchestrator | 2025-02-10 09:09:05 | INFO  | Task 408a33d5-db24-4149-ad3a-8d98d5a266b2 (network) was prepared for execution. 2025-02-10 09:09:08.948717 | orchestrator | 2025-02-10 09:09:05 | INFO  | It takes a moment until task 408a33d5-db24-4149-ad3a-8d98d5a266b2 (network) has been started and output is visible here. 2025-02-10 09:09:08.948932 | orchestrator | 2025-02-10 09:09:08.949018 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-02-10 09:09:08.949041 | orchestrator | 2025-02-10 09:09:08.949548 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-02-10 09:09:08.950986 | orchestrator | Monday 10 February 2025 09:09:08 +0000 (0:00:00.218) 0:00:00.218 ******* 2025-02-10 09:09:09.033470 | orchestrator | [WARNING]: Found variable using reserved name: q 2025-02-10 09:09:09.106084 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:09.185184 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:09.263311 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:09.340866 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:09.541258 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:09.686747 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:09.687714 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:09.690660 | orchestrator | 2025-02-10 09:09:10.944663 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-02-10 09:09:10.944816 | orchestrator | Monday 10 February 2025 09:09:09 +0000 (0:00:00.736) 0:00:00.954 ******* 2025-02-10 09:09:10.944848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:09:10.945593 | orchestrator | 2025-02-10 09:09:10.947278 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-02-10 09:09:10.948675 | orchestrator | Monday 10 February 2025 09:09:10 +0000 (0:00:01.256) 0:00:02.210 ******* 2025-02-10 09:09:13.047107 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:13.049672 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:13.050390 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:13.050419 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:13.050433 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:13.051377 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:13.052956 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:13.053289 | orchestrator | 2025-02-10 09:09:13.054643 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-02-10 09:09:13.055707 | orchestrator | Monday 10 February 2025 09:09:13 +0000 (0:00:02.105) 0:00:04.316 ******* 2025-02-10 09:09:14.766281 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:14.766575 | orchestrator | ok: 
[testbed-node-0] 2025-02-10 09:09:14.767549 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:14.768058 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:14.768737 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:14.770828 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:14.770912 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:14.771482 | orchestrator | 2025-02-10 09:09:14.772239 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-02-10 09:09:14.772913 | orchestrator | Monday 10 February 2025 09:09:14 +0000 (0:00:01.721) 0:00:06.037 ******* 2025-02-10 09:09:15.550208 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-02-10 09:09:15.550397 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-02-10 09:09:15.550418 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-02-10 09:09:15.551926 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-02-10 09:09:15.552529 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-02-10 09:09:16.016901 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-02-10 09:09:16.017072 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-02-10 09:09:16.017106 | orchestrator | 2025-02-10 09:09:16.018177 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-02-10 09:09:17.948241 | orchestrator | Monday 10 February 2025 09:09:16 +0000 (0:00:01.244) 0:00:07.282 ******* 2025-02-10 09:09:17.948451 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:09:17.948762 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-02-10 09:09:17.951739 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:09:17.952985 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-02-10 09:09:17.953603 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-02-10 09:09:17.954544 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-02-10 09:09:17.955461 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-02-10 09:09:17.956296 | orchestrator | 2025-02-10 09:09:17.956588 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-02-10 09:09:17.957206 | orchestrator | Monday 10 February 2025 09:09:17 +0000 (0:00:01.932) 0:00:09.214 ******* 2025-02-10 09:09:19.655952 | orchestrator | changed: [testbed-manager] 2025-02-10 09:09:19.656238 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:09:19.656273 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:09:19.656661 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:09:19.656690 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:09:19.656710 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:09:19.658812 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:09:19.659191 | orchestrator | 2025-02-10 09:09:19.659222 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-02-10 09:09:19.659243 | orchestrator | Monday 10 February 2025 09:09:19 +0000 (0:00:01.707) 0:00:10.922 ******* 2025-02-10 09:09:20.150005 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:09:20.263679 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:09:20.696454 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-02-10 09:09:20.696632 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-02-10 09:09:20.698434 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-02-10 
09:09:20.699019 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-02-10 09:09:20.699651 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-02-10 09:09:20.699926 | orchestrator | 2025-02-10 09:09:20.700692 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-02-10 09:09:20.703662 | orchestrator | Monday 10 February 2025 09:09:20 +0000 (0:00:01.046) 0:00:11.968 ******* 2025-02-10 09:09:21.192659 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:21.464868 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:21.916939 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:21.917151 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:21.918647 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:21.919252 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:21.920375 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:21.921046 | orchestrator | 2025-02-10 09:09:21.921954 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-02-10 09:09:21.923236 | orchestrator | Monday 10 February 2025 09:09:21 +0000 (0:00:01.218) 0:00:13.187 ******* 2025-02-10 09:09:22.082622 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:09:22.167939 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:09:22.251600 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:09:22.335195 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:09:22.579910 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:09:22.725045 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:09:22.725247 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:09:22.725963 | orchestrator | 2025-02-10 09:09:22.732440 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-02-10 09:09:24.811929 | orchestrator | Monday 10 February 2025 09:09:22 +0000 (0:00:00.804) 0:00:13.992 ******* 2025-02-10 09:09:24.812163 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:24.812274 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:24.812303 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:24.812333 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:24.813632 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:24.815552 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:24.816533 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:24.818481 | orchestrator | 2025-02-10 09:09:24.818973 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-02-10 09:09:24.819492 | orchestrator | Monday 10 February 2025 09:09:24 +0000 (0:00:02.087) 0:00:16.080 ******* 2025-02-10 09:09:26.794818 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-10 09:09:26.796189 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-10 09:09:26.796224 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-10 09:09:26.797197 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-10 09:09:26.797829 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-02-10 09:09:26.798640 | orchestrator | changed: 
[testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-10 09:09:26.799613 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-10 09:09:26.800541 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-10 09:09:26.801298 | orchestrator | 2025-02-10 09:09:26.801917 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-02-10 09:09:26.803878 | orchestrator | Monday 10 February 2025 09:09:26 +0000 (0:00:01.980) 0:00:18.061 ******* 2025-02-10 09:09:28.381663 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:28.381854 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:09:28.381989 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:09:28.384606 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:09:28.385680 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:09:28.386447 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:09:28.388695 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:09:28.389247 | orchestrator | 2025-02-10 09:09:28.390324 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-02-10 09:09:28.390948 | orchestrator | Monday 10 February 2025 09:09:28 +0000 (0:00:01.590) 0:00:19.651 ******* 2025-02-10 09:09:29.870884 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:09:29.872630 | orchestrator | 2025-02-10 09:09:29.873538 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-02-10 09:09:29.874168 | orchestrator | Monday 10 February 2025 09:09:29 +0000 (0:00:01.486) 0:00:21.138 ******* 2025-02-10 09:09:30.455310 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:30.899795 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:30.900289 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:30.901074 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:30.901629 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:30.902206 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:30.902796 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:30.903622 | orchestrator | 2025-02-10 09:09:30.904552 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-02-10 09:09:30.904866 | orchestrator | Monday 10 February 2025 09:09:30 +0000 (0:00:01.028) 0:00:22.166 ******* 2025-02-10 09:09:31.076545 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:31.309297 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:09:31.397011 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:09:31.486772 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:09:31.570882 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:09:31.707195 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:09:31.707789 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:09:31.708814 | orchestrator | 2025-02-10 09:09:31.712567 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-02-10 09:09:32.170822 | orchestrator | Monday 10 February 2025 09:09:31 +0000 (0:00:00.808) 0:00:22.974 ******* 2025-02-10 09:09:32.171007 | 
orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-10 09:09:32.171533 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-02-10 09:09:32.171604 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-10 09:09:32.174949 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-02-10 09:09:32.730557 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-10 09:09:32.730890 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-02-10 09:09:32.730936 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-10 09:09:32.730963 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-02-10 09:09:32.732802 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-10 09:09:32.732906 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-02-10 09:09:32.735555 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-10 09:09:32.736056 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-02-10 09:09:32.736083 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-10 09:09:32.736098 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-02-10 09:09:32.736117 | orchestrator | 2025-02-10 09:09:32.736758 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-02-10 09:09:32.737501 | orchestrator | Monday 10 February 2025 09:09:32 +0000 (0:00:01.026) 0:00:24.001 ******* 2025-02-10 09:09:33.091463 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:09:33.179804 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:09:33.293285 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:09:33.381576 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:09:33.487340 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:09:33.660832 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:09:33.661803 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:09:33.663761 | orchestrator | 2025-02-10 09:09:33.664689 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-02-10 09:09:33.666254 | orchestrator | Monday 10 February 2025 09:09:33 +0000 (0:00:00.931) 0:00:24.932 ******* 2025-02-10 09:09:33.830000 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:09:33.920965 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:09:34.004601 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:09:34.256892 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:09:34.352318 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:09:35.623056 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:09:35.624701 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:09:35.627547 | orchestrator | 2025-02-10 09:09:35.630014 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-02-10 09:09:35.630869 | orchestrator | Monday 10 February 2025 09:09:35 +0000 (0:00:01.957) 0:00:26.889 ******* 2025-02-10 09:09:35.793822 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:09:35.878204 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:09:35.961390 | 
orchestrator | skipping: [testbed-node-1] 2025-02-10 09:09:36.042586 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:09:36.128323 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:09:36.169054 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:09:36.169492 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:09:36.169987 | orchestrator | 2025-02-10 09:09:36.170820 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:09:36.171126 | orchestrator | 2025-02-10 09:09:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:09:36.171565 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-02-10 09:09:36.171622 | orchestrator | 2025-02-10 09:09:36 | INFO  | Please wait and do not abort execution. 2025-02-10 09:09:36.172005 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-02-10 09:09:36.172448 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-02-10 09:09:36.172787 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-02-10 09:09:36.173771 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-02-10 09:09:36.174548 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-02-10 09:09:36.174930 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-02-10 09:09:36.176073 | orchestrator | 2025-02-10 09:09:36.176538 | orchestrator | 2025-02-10 09:09:36.177340 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:09:36.177739 | orchestrator | Monday 10 February 2025 09:09:36 +0000 (0:00:00.550) 0:00:27.439 ******* 2025-02-10 09:09:36.178161 | orchestrator | =============================================================================== 2025-02-10 09:09:36.178917 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.11s 2025-02-10 09:09:36.179304 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.09s 2025-02-10 09:09:36.179779 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.98s 2025-02-10 09:09:36.180136 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 1.96s 2025-02-10 09:09:36.180605 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.93s 2025-02-10 09:09:36.181031 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.72s 2025-02-10 09:09:36.181390 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.71s 2025-02-10 09:09:36.181838 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.59s 2025-02-10 09:09:36.182127 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.49s 2025-02-10 09:09:36.182973 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.26s 2025-02-10 09:09:36.183503 | orchestrator | osism.commons.network : Create required directories --------------------- 1.24s 2025-02-10 09:09:36.183596 | 
orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.22s 2025-02-10 09:09:36.184042 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.05s 2025-02-10 09:09:36.184337 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.03s 2025-02-10 09:09:36.184695 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.03s 2025-02-10 09:09:36.184982 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 0.93s 2025-02-10 09:09:36.185435 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.81s 2025-02-10 09:09:36.185861 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.80s 2025-02-10 09:09:36.186154 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.74s 2025-02-10 09:09:36.187027 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.55s 2025-02-10 09:09:36.812532 | orchestrator | + osism apply wireguard 2025-02-10 09:09:38.283860 | orchestrator | 2025-02-10 09:09:38 | INFO  | Task 60fe11a1-0f54-4f90-afee-f54efb464cbc (wireguard) was prepared for execution. 2025-02-10 09:09:41.555484 | orchestrator | 2025-02-10 09:09:38 | INFO  | It takes a moment until task 60fe11a1-0f54-4f90-afee-f54efb464cbc (wireguard) has been started and output is visible here. 2025-02-10 09:09:41.555681 | orchestrator | 2025-02-10 09:09:41.555814 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-02-10 09:09:41.556568 | orchestrator | 2025-02-10 09:09:41.559149 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-02-10 09:09:41.562093 | orchestrator | Monday 10 February 2025 09:09:41 +0000 (0:00:00.174) 0:00:00.174 ******* 2025-02-10 09:09:43.050579 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:43.050868 | orchestrator | 2025-02-10 09:09:43.050923 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-02-10 09:09:49.736722 | orchestrator | Monday 10 February 2025 09:09:43 +0000 (0:00:01.500) 0:00:01.674 ******* 2025-02-10 09:09:49.736917 | orchestrator | changed: [testbed-manager] 2025-02-10 09:09:49.737982 | orchestrator | 2025-02-10 09:09:49.738008 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-02-10 09:09:49.738076 | orchestrator | Monday 10 February 2025 09:09:49 +0000 (0:00:06.686) 0:00:08.361 ******* 2025-02-10 09:09:50.315234 | orchestrator | changed: [testbed-manager] 2025-02-10 09:09:50.316052 | orchestrator | 2025-02-10 09:09:50.316829 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-02-10 09:09:50.317615 | orchestrator | Monday 10 February 2025 09:09:50 +0000 (0:00:00.579) 0:00:08.940 ******* 2025-02-10 09:09:50.768417 | orchestrator | changed: [testbed-manager] 2025-02-10 09:09:50.769078 | orchestrator | 2025-02-10 09:09:50.770431 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-02-10 09:09:50.771096 | orchestrator | Monday 10 February 2025 09:09:50 +0000 (0:00:00.453) 0:00:09.394 ******* 2025-02-10 09:09:51.462634 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:51.462811 | orchestrator | 2025-02-10 09:09:51.463657 | orchestrator | TASK 
[osism.services.wireguard : Get public key - server] ********************** 2025-02-10 09:09:51.464134 | orchestrator | Monday 10 February 2025 09:09:51 +0000 (0:00:00.694) 0:00:10.088 ******* 2025-02-10 09:09:51.893412 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:51.894598 | orchestrator | 2025-02-10 09:09:51.895661 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-02-10 09:09:51.895698 | orchestrator | Monday 10 February 2025 09:09:51 +0000 (0:00:00.428) 0:00:10.517 ******* 2025-02-10 09:09:52.315647 | orchestrator | ok: [testbed-manager] 2025-02-10 09:09:52.315844 | orchestrator | 2025-02-10 09:09:52.317171 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-02-10 09:09:52.318076 | orchestrator | Monday 10 February 2025 09:09:52 +0000 (0:00:00.423) 0:00:10.941 ******* 2025-02-10 09:09:53.519533 | orchestrator | changed: [testbed-manager] 2025-02-10 09:09:53.520443 | orchestrator | 2025-02-10 09:09:53.520497 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-02-10 09:09:53.521296 | orchestrator | Monday 10 February 2025 09:09:53 +0000 (0:00:01.201) 0:00:12.143 ******* 2025-02-10 09:09:54.424140 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-10 09:09:54.424476 | orchestrator | changed: [testbed-manager] 2025-02-10 09:09:54.425889 | orchestrator | 2025-02-10 09:09:54.426447 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-02-10 09:09:54.427614 | orchestrator | Monday 10 February 2025 09:09:54 +0000 (0:00:00.905) 0:00:13.048 ******* 2025-02-10 09:09:56.215521 | orchestrator | changed: [testbed-manager] 2025-02-10 09:09:56.217569 | orchestrator | 2025-02-10 09:09:56.217636 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-02-10 09:09:56.217672 | orchestrator | Monday 10 February 2025 09:09:56 +0000 (0:00:01.791) 0:00:14.840 ******* 2025-02-10 09:09:57.137206 | orchestrator | changed: [testbed-manager] 2025-02-10 09:09:57.137427 | orchestrator | 2025-02-10 09:09:57.137493 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:09:57.137517 | orchestrator | 2025-02-10 09:09:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:09:57.137599 | orchestrator | 2025-02-10 09:09:57 | INFO  | Please wait and do not abort execution. 
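Editor's note: the wireguard play above generates a server key pair and a preshared key, renders /etc/wireguard/wg0.conf plus a client configuration, and enables wg-quick@wg0.service. A rough sketch of the same steps using stock Ansible modules and the standard wg/wg-quick tooling; paths, templates, and options are assumptions, not the actual osism.services.wireguard role:

    # Illustrative sketch of the key-generation and service steps (not the real role).
    - name: Create public and private key - server
      ansible.builtin.shell: |
        set -o pipefail
        umask 077
        wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
      args:
        creates: /etc/wireguard/server.key
        executable: /bin/bash

    - name: Create preshared key
      ansible.builtin.shell: umask 077; wg genpsk > /etc/wireguard/server.psk
      args:
        creates: /etc/wireguard/server.psk

    - name: Copy wg0.conf configuration file
      ansible.builtin.template:
        src: wg0.conf.j2               # template name is an assumption
        dest: /etc/wireguard/wg0.conf
        mode: "0600"
      notify: Restart wg0 service

    - name: Manage wg-quick@wg0.service service
      ansible.builtin.systemd:
        name: wg-quick@wg0.service
        enabled: true
        state: started

    # handlers/main.yml
    - name: Restart wg0 service
      ansible.builtin.systemd:
        name: wg-quick@wg0.service
        state: restarted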
2025-02-10 09:09:57.137622 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:09:57.138717 | orchestrator | 2025-02-10 09:09:57.138764 | orchestrator | 2025-02-10 09:09:57.139053 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:09:57.139084 | orchestrator | Monday 10 February 2025 09:09:57 +0000 (0:00:00.922) 0:00:15.762 ******* 2025-02-10 09:09:57.139374 | orchestrator | =============================================================================== 2025-02-10 09:09:57.140193 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.69s 2025-02-10 09:09:57.140415 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.79s 2025-02-10 09:09:57.140657 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.50s 2025-02-10 09:09:57.140967 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.20s 2025-02-10 09:09:57.141507 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.92s 2025-02-10 09:09:57.141998 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.91s 2025-02-10 09:09:57.142318 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.69s 2025-02-10 09:09:57.142717 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.58s 2025-02-10 09:09:57.142855 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s 2025-02-10 09:09:57.143096 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.43s 2025-02-10 09:09:57.144485 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2025-02-10 09:09:57.721690 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-02-10 09:09:57.757694 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-02-10 09:09:57.832847 | orchestrator | Dload Upload Total Spent Left Speed 2025-02-10 09:09:57.833018 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 185 0 --:--:-- --:--:-- --:--:-- 186 2025-02-10 09:09:57.845556 | orchestrator | + osism apply --environment custom workarounds 2025-02-10 09:09:59.230432 | orchestrator | 2025-02-10 09:09:59 | INFO  | Trying to run play workarounds in environment custom 2025-02-10 09:09:59.277485 | orchestrator | 2025-02-10 09:09:59 | INFO  | Task b9549cef-4247-4651-8ceb-f03a83f7fd40 (workarounds) was prepared for execution. 2025-02-10 09:10:02.599962 | orchestrator | 2025-02-10 09:09:59 | INFO  | It takes a moment until task b9549cef-4247-4651-8ceb-f03a83f7fd40 (workarounds) has been started and output is visible here. 
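Editor's note: the workarounds play that follows first groups the hosts by virtualization role and then runs "netplan apply" on the manager and on all other nodes. A minimal sketch of those two steps, assuming the standard group_by module and the stock netplan CLI; the real play under /opt/configuration may be structured differently:

    # Illustrative sketch only; not the actual workarounds play.
    - name: Group hosts based on virtualization_role
      ansible.builtin.group_by:
        key: "virtualization_role_{{ ansible_virtualization_role | default('unknown') }}"

    - name: Apply netplan configuration
      ansible.builtin.command: netplan apply
      become: true
      changed_when: false   # reported as "ok" in the log above, so treated as non-changing here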
2025-02-10 09:10:02.600162 | orchestrator | 2025-02-10 09:10:02.600233 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:10:02.603665 | orchestrator | 2025-02-10 09:10:02.605333 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-02-10 09:10:02.606091 | orchestrator | Monday 10 February 2025 09:10:02 +0000 (0:00:00.180) 0:00:00.180 ******* 2025-02-10 09:10:02.786125 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-02-10 09:10:02.880096 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-02-10 09:10:02.970945 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-02-10 09:10:03.060553 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-02-10 09:10:03.265895 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-02-10 09:10:03.429084 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-02-10 09:10:03.429779 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-02-10 09:10:03.429845 | orchestrator | 2025-02-10 09:10:03.430476 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-02-10 09:10:03.430963 | orchestrator | 2025-02-10 09:10:03.431407 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-02-10 09:10:03.432084 | orchestrator | Monday 10 February 2025 09:10:03 +0000 (0:00:00.833) 0:00:01.014 ******* 2025-02-10 09:10:06.208820 | orchestrator | ok: [testbed-manager] 2025-02-10 09:10:06.209334 | orchestrator | 2025-02-10 09:10:06.209434 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-02-10 09:10:06.209564 | orchestrator | 2025-02-10 09:10:06.209591 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-02-10 09:10:06.209618 | orchestrator | Monday 10 February 2025 09:10:06 +0000 (0:00:02.774) 0:00:03.789 ******* 2025-02-10 09:10:08.190124 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:10:08.190339 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:10:08.191077 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:10:08.192844 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:10:08.192999 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:10:08.193082 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:10:08.193105 | orchestrator | 2025-02-10 09:10:08.195625 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-02-10 09:10:08.196574 | orchestrator | 2025-02-10 09:10:08.196695 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-02-10 09:10:09.705034 | orchestrator | Monday 10 February 2025 09:10:08 +0000 (0:00:01.981) 0:00:05.770 ******* 2025-02-10 09:10:09.705191 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-10 09:10:09.706774 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-10 09:10:09.708080 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-10 09:10:09.708116 | orchestrator | changed: [testbed-node-3] => 
(item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-10 09:10:09.708623 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-10 09:10:09.709685 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-10 09:10:09.710467 | orchestrator | 2025-02-10 09:10:09.711427 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-02-10 09:10:09.712083 | orchestrator | Monday 10 February 2025 09:10:09 +0000 (0:00:01.515) 0:00:07.285 ******* 2025-02-10 09:10:13.344700 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:10:13.346411 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:10:13.346464 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:10:13.347663 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:10:13.347732 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:10:13.347760 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:10:13.348628 | orchestrator | 2025-02-10 09:10:13.349290 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-02-10 09:10:13.349648 | orchestrator | Monday 10 February 2025 09:10:13 +0000 (0:00:03.639) 0:00:10.925 ******* 2025-02-10 09:10:13.526277 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:10:13.622924 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:10:13.855662 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:10:13.954085 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:10:14.117640 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:10:14.118573 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:10:14.121474 | orchestrator | 2025-02-10 09:10:14.121902 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-02-10 09:10:14.121931 | orchestrator | 2025-02-10 09:10:14.121953 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-02-10 09:10:14.123572 | orchestrator | Monday 10 February 2025 09:10:14 +0000 (0:00:00.776) 0:00:11.702 ******* 2025-02-10 09:10:15.994891 | orchestrator | changed: [testbed-manager] 2025-02-10 09:10:15.995087 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:10:15.999100 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:10:16.000366 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:10:16.000732 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:10:16.002115 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:10:16.002446 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:10:16.005779 | orchestrator | 2025-02-10 09:10:16.006077 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-02-10 09:10:16.009950 | orchestrator | Monday 10 February 2025 09:10:15 +0000 (0:00:01.875) 0:00:13.577 ******* 2025-02-10 09:10:17.829467 | orchestrator | changed: [testbed-manager] 2025-02-10 09:10:17.829698 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:10:17.831038 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:10:17.832582 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:10:17.833383 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:10:17.834948 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:10:17.836849 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:10:17.837264 | orchestrator | 
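Editor's note: the play has just copied a workarounds.sh script and a matching systemd unit to every host; the next tasks reload systemd and enable the unit. A hedged sketch of what such a oneshot unit and the enabling tasks could look like; the unit name is taken from the log, but its content, file locations, and options are assumptions:

    # Illustrative sketch; unit content and paths are assumptions.
    - name: Copy workarounds systemd unit file
      ansible.builtin.copy:
        dest: /etc/systemd/system/workarounds.service
        mode: "0644"
        content: |
          [Unit]
          Description=Apply local workarounds at boot
          After=network-online.target

          [Service]
          Type=oneshot
          ExecStart=/usr/local/bin/workarounds.sh
          RemainAfterExit=true

          [Install]
          WantedBy=multi-user.target

    - name: Reload systemd daemon and enable workarounds.service
      ansible.builtin.systemd:
        daemon_reload: true
        name: workarounds.service
        enabled: true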
2025-02-10 09:10:17.837815 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-02-10 09:10:17.838792 | orchestrator | Monday 10 February 2025 09:10:17 +0000 (0:00:01.831) 0:00:15.409 ******* 2025-02-10 09:10:19.795648 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:10:19.795853 | orchestrator | ok: [testbed-manager] 2025-02-10 09:10:19.796814 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:10:19.798604 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:10:19.800125 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:10:19.800888 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:10:19.802781 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:10:19.803724 | orchestrator | 2025-02-10 09:10:19.803946 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-02-10 09:10:19.804704 | orchestrator | Monday 10 February 2025 09:10:19 +0000 (0:00:01.966) 0:00:17.375 ******* 2025-02-10 09:10:21.457862 | orchestrator | changed: [testbed-manager] 2025-02-10 09:10:21.458483 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:10:21.459575 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:10:21.460931 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:10:21.462513 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:10:21.463299 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:10:21.464168 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:10:21.464457 | orchestrator | 2025-02-10 09:10:21.465164 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-02-10 09:10:21.465684 | orchestrator | Monday 10 February 2025 09:10:21 +0000 (0:00:01.665) 0:00:19.041 ******* 2025-02-10 09:10:21.618174 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:10:21.713840 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:10:21.798335 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:10:22.036952 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:10:22.120554 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:10:22.262269 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:10:22.262637 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:10:22.264319 | orchestrator | 2025-02-10 09:10:22.265262 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-02-10 09:10:22.266125 | orchestrator | 2025-02-10 09:10:22.267176 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-02-10 09:10:22.268090 | orchestrator | Monday 10 February 2025 09:10:22 +0000 (0:00:00.805) 0:00:19.846 ******* 2025-02-10 09:10:24.953501 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:10:24.953699 | orchestrator | ok: [testbed-manager] 2025-02-10 09:10:24.954443 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:10:24.954970 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:10:24.955470 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:10:24.957250 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:10:24.957617 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:10:24.958685 | orchestrator | 2025-02-10 09:10:24.959458 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:10:24.959817 | orchestrator | 2025-02-10 09:10:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
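Editor's note: earlier in this play the custom CA certificate testbed.crt was copied to the non-manager nodes and "update-ca-certificates" was run (the "update-ca-trust" variant was skipped because these are Debian-family hosts). A minimal sketch of that Debian/Ubuntu pattern; the destination directory is the distribution default and an assumption here:

    # Illustrative sketch of trusting a custom CA on Debian/Ubuntu hosts.
    - name: Copy custom CA certificates
      ansible.builtin.copy:
        src: /opt/configuration/environments/kolla/certificates/ca/testbed.crt
        dest: /usr/local/share/ca-certificates/testbed.crt
        mode: "0644"

    - name: Run update-ca-certificates
      ansible.builtin.command: update-ca-certificates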
2025-02-10 09:10:24.960109 | orchestrator | 2025-02-10 09:10:24 | INFO  | Please wait and do not abort execution. 2025-02-10 09:10:24.960834 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:10:24.961253 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:10:24.962110 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:10:24.962546 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:10:24.962861 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:10:24.963207 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:10:24.963598 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:10:24.963931 | orchestrator | 2025-02-10 09:10:24.964423 | orchestrator | 2025-02-10 09:10:24.964586 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:10:24.964932 | orchestrator | Monday 10 February 2025 09:10:24 +0000 (0:00:02.689) 0:00:22.536 ******* 2025-02-10 09:10:24.965259 | orchestrator | =============================================================================== 2025-02-10 09:10:24.965682 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.64s 2025-02-10 09:10:24.966004 | orchestrator | Apply netplan configuration --------------------------------------------- 2.77s 2025-02-10 09:10:24.966462 | orchestrator | Install python3-docker -------------------------------------------------- 2.69s 2025-02-10 09:10:24.966638 | orchestrator | Apply netplan configuration --------------------------------------------- 1.98s 2025-02-10 09:10:24.966942 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.97s 2025-02-10 09:10:24.967381 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.88s 2025-02-10 09:10:24.967627 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.83s 2025-02-10 09:10:24.968533 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.67s 2025-02-10 09:10:24.968739 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.52s 2025-02-10 09:10:24.969292 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.83s 2025-02-10 09:10:24.969332 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.81s 2025-02-10 09:10:24.969463 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.78s 2025-02-10 09:10:25.574333 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-02-10 09:10:27.013828 | orchestrator | 2025-02-10 09:10:27 | INFO  | Task 711ada49-ff7e-4e7b-b3e8-05b884df48f1 (reboot) was prepared for execution. 2025-02-10 09:10:30.206316 | orchestrator | 2025-02-10 09:10:27 | INFO  | It takes a moment until task 711ada49-ff7e-4e7b-b3e8-05b884df48f1 (reboot) has been started and output is visible here. 
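Editor's note: the reboot play that follows is guarded by the extra variable ireallymeanit=yes passed on the command line above, triggers the reboot without waiting for the nodes to come back, and skips the "wait for the reboot to complete" variant. A rough sketch of this guard-and-reboot pattern with stock modules; variable handling and timeouts are assumptions, not the actual playbook:

    # Illustrative sketch of a confirmation-guarded reboot (not the real playbook).
    - name: Exit playbook, if user did not mean to reboot systems
      ansible.builtin.fail:
        msg: "Pass -e ireallymeanit=yes to confirm the reboot."
      when: ireallymeanit | default('no') != 'yes'

    - name: Reboot system - do not wait for the reboot to complete
      ansible.builtin.shell: sleep 2 && shutdown -r now
      async: 1
      poll: 0

    - name: Reboot system - wait for the reboot to complete
      ansible.builtin.reboot:
        reboot_timeout: 900
      when: reboot_wait | default(false) | bool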
2025-02-10 09:10:30.206499 | orchestrator | 2025-02-10 09:10:30.207379 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-10 09:10:30.208539 | orchestrator | 2025-02-10 09:10:30.208581 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-10 09:10:30.210158 | orchestrator | Monday 10 February 2025 09:10:30 +0000 (0:00:00.157) 0:00:00.157 ******* 2025-02-10 09:10:30.307659 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:10:30.307879 | orchestrator | 2025-02-10 09:10:30.308554 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-10 09:10:30.309239 | orchestrator | Monday 10 February 2025 09:10:30 +0000 (0:00:00.105) 0:00:00.263 ******* 2025-02-10 09:10:31.248856 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:10:31.249124 | orchestrator | 2025-02-10 09:10:31.250366 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-10 09:10:31.250844 | orchestrator | Monday 10 February 2025 09:10:31 +0000 (0:00:00.940) 0:00:01.203 ******* 2025-02-10 09:10:31.390802 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:10:31.392176 | orchestrator | 2025-02-10 09:10:31.392236 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-10 09:10:31.393331 | orchestrator | 2025-02-10 09:10:31.394151 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-10 09:10:31.394212 | orchestrator | Monday 10 February 2025 09:10:31 +0000 (0:00:00.139) 0:00:01.343 ******* 2025-02-10 09:10:31.495101 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:10:31.495644 | orchestrator | 2025-02-10 09:10:31.496575 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-10 09:10:31.498580 | orchestrator | Monday 10 February 2025 09:10:31 +0000 (0:00:00.105) 0:00:01.449 ******* 2025-02-10 09:10:32.139941 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:10:32.140188 | orchestrator | 2025-02-10 09:10:32.140689 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-10 09:10:32.141506 | orchestrator | Monday 10 February 2025 09:10:32 +0000 (0:00:00.645) 0:00:02.095 ******* 2025-02-10 09:10:32.263463 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:10:32.263640 | orchestrator | 2025-02-10 09:10:32.265401 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-10 09:10:32.265977 | orchestrator | 2025-02-10 09:10:32.266006 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-10 09:10:32.266058 | orchestrator | Monday 10 February 2025 09:10:32 +0000 (0:00:00.122) 0:00:02.218 ******* 2025-02-10 09:10:32.479165 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:10:32.479804 | orchestrator | 2025-02-10 09:10:32.480430 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-10 09:10:32.481035 | orchestrator | Monday 10 February 2025 09:10:32 +0000 (0:00:00.216) 0:00:02.434 ******* 2025-02-10 09:10:33.157560 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:10:33.157774 | orchestrator | 2025-02-10 09:10:33.157803 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-10 
09:10:33.157840 | orchestrator | Monday 10 February 2025 09:10:33 +0000 (0:00:00.677) 0:00:03.112 ******* 2025-02-10 09:10:33.266420 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:10:33.267622 | orchestrator | 2025-02-10 09:10:33.267752 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-10 09:10:33.267782 | orchestrator | 2025-02-10 09:10:33.268442 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-10 09:10:33.269234 | orchestrator | Monday 10 February 2025 09:10:33 +0000 (0:00:00.105) 0:00:03.218 ******* 2025-02-10 09:10:33.355399 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:10:33.355640 | orchestrator | 2025-02-10 09:10:33.355672 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-10 09:10:33.356592 | orchestrator | Monday 10 February 2025 09:10:33 +0000 (0:00:00.093) 0:00:03.311 ******* 2025-02-10 09:10:34.101721 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:10:34.102309 | orchestrator | 2025-02-10 09:10:34.102431 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-10 09:10:34.102511 | orchestrator | Monday 10 February 2025 09:10:34 +0000 (0:00:00.745) 0:00:04.057 ******* 2025-02-10 09:10:34.221521 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:10:34.222007 | orchestrator | 2025-02-10 09:10:34.223438 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-10 09:10:34.223922 | orchestrator | 2025-02-10 09:10:34.226134 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-10 09:10:34.342293 | orchestrator | Monday 10 February 2025 09:10:34 +0000 (0:00:00.116) 0:00:04.173 ******* 2025-02-10 09:10:34.342532 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:10:34.342629 | orchestrator | 2025-02-10 09:10:34.342735 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-10 09:10:34.343138 | orchestrator | Monday 10 February 2025 09:10:34 +0000 (0:00:00.121) 0:00:04.295 ******* 2025-02-10 09:10:35.058946 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:10:35.059148 | orchestrator | 2025-02-10 09:10:35.059449 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-10 09:10:35.059498 | orchestrator | Monday 10 February 2025 09:10:35 +0000 (0:00:00.720) 0:00:05.015 ******* 2025-02-10 09:10:35.169846 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:10:35.171304 | orchestrator | 2025-02-10 09:10:35.172139 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-10 09:10:35.172400 | orchestrator | 2025-02-10 09:10:35.173140 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-10 09:10:35.174447 | orchestrator | Monday 10 February 2025 09:10:35 +0000 (0:00:00.107) 0:00:05.122 ******* 2025-02-10 09:10:35.276142 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:10:35.276960 | orchestrator | 2025-02-10 09:10:35.277406 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-10 09:10:35.278544 | orchestrator | Monday 10 February 2025 09:10:35 +0000 (0:00:00.108) 0:00:05.231 ******* 2025-02-10 09:10:35.929218 | orchestrator | changed: [testbed-node-5] 2025-02-10 
09:10:35.930101 | orchestrator | 2025-02-10 09:10:35.930153 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-10 09:10:35.930275 | orchestrator | Monday 10 February 2025 09:10:35 +0000 (0:00:00.652) 0:00:05.883 ******* 2025-02-10 09:10:35.966503 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:10:35.967185 | orchestrator | 2025-02-10 09:10:35.968299 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:10:35.968577 | orchestrator | 2025-02-10 09:10:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:10:35.969735 | orchestrator | 2025-02-10 09:10:35 | INFO  | Please wait and do not abort execution. 2025-02-10 09:10:35.969772 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:10:35.970598 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:10:35.971185 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:10:35.971692 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:10:35.972040 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:10:35.973039 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:10:35.973172 | orchestrator | 2025-02-10 09:10:35.973713 | orchestrator | 2025-02-10 09:10:35.974136 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:10:35.974656 | orchestrator | Monday 10 February 2025 09:10:35 +0000 (0:00:00.038) 0:00:05.922 ******* 2025-02-10 09:10:35.975003 | orchestrator | =============================================================================== 2025-02-10 09:10:35.975447 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.38s 2025-02-10 09:10:35.975771 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.75s 2025-02-10 09:10:35.976479 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s 2025-02-10 09:10:36.539595 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-02-10 09:10:38.003707 | orchestrator | 2025-02-10 09:10:38 | INFO  | Task a7690765-8e46-4645-a595-6018cb950a42 (wait-for-connection) was prepared for execution. 2025-02-10 09:10:41.303034 | orchestrator | 2025-02-10 09:10:38 | INFO  | It takes a moment until task a7690765-8e46-4645-a595-6018cb950a42 (wait-for-connection) has been started and output is visible here. 
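`osism apply wait-for-connection -l testbed-nodes` pairs with the fire-and-forget reboot above: the play that follows simply blocks until every rebooted node answers again (about 13 seconds in this run). A rough shell equivalent, assuming plain SSH reachability is the readiness criterion:

```bash
# Minimal sketch: poll SSH on each node until it responds or a timeout expires.
wait_for_ssh() {
    local host="$1" timeout="${2:-600}" waited=0
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        sleep 5
        waited=$((waited + 5))
        if (( waited >= timeout )); then
            echo "Timed out waiting for ${host}" >&2
            return 1
        fi
    done
}

for node in testbed-node-{0..5}; do
    wait_for_ssh "$node" &
done
wait   # in the play below all six nodes report ok after ~13 seconds
```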
2025-02-10 09:10:41.303199 | orchestrator | 2025-02-10 09:10:41.304437 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-02-10 09:10:41.306392 | orchestrator | 2025-02-10 09:10:41.307192 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-02-10 09:10:41.308219 | orchestrator | Monday 10 February 2025 09:10:41 +0000 (0:00:00.228) 0:00:00.228 ******* 2025-02-10 09:10:54.517699 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:10:54.518370 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:10:54.518406 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:10:54.518423 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:10:54.518445 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:10:54.518997 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:10:54.520983 | orchestrator | 2025-02-10 09:10:54.521817 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:10:54.522005 | orchestrator | 2025-02-10 09:10:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:10:54.522211 | orchestrator | 2025-02-10 09:10:54 | INFO  | Please wait and do not abort execution. 2025-02-10 09:10:54.522771 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:10:54.523301 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:10:54.523697 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:10:54.524039 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:10:54.524652 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:10:54.525093 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:10:54.525845 | orchestrator | 2025-02-10 09:10:54.526084 | orchestrator | 2025-02-10 09:10:54.526376 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:10:54.526712 | orchestrator | Monday 10 February 2025 09:10:54 +0000 (0:00:13.215) 0:00:13.443 ******* 2025-02-10 09:10:54.527099 | orchestrator | =============================================================================== 2025-02-10 09:10:54.527500 | orchestrator | Wait until remote system is reachable ---------------------------------- 13.22s 2025-02-10 09:10:55.071445 | orchestrator | + osism apply hddtemp 2025-02-10 09:10:56.521059 | orchestrator | 2025-02-10 09:10:56 | INFO  | Task 6d12caac-c755-4d46-b662-83b1a0c3a821 (hddtemp) was prepared for execution. 2025-02-10 09:10:59.788736 | orchestrator | 2025-02-10 09:10:56 | INFO  | It takes a moment until task 6d12caac-c755-4d46-b662-83b1a0c3a821 (hddtemp) has been started and output is visible here. 
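The hddtemp play that follows replaces the legacy hddtemp daemon with the in-kernel drivetemp hwmon driver plus lm-sensors. Sketched as shell commands (illustrative only; the role performs these steps through Ansible modules, and the exact module-load path is an assumption):

```bash
sudo apt-get remove -y hddtemp                                  # "Remove hddtemp package"
echo drivetemp | sudo tee /etc/modules-load.d/drivetemp.conf    # "Enable Kernel Module drivetemp"
sudo modprobe drivetemp                                          # "Load Kernel Module drivetemp"
sudo apt-get install -y lm-sensors                               # "Install lm-sensors"
sudo systemctl enable --now lm-sensors                           # "Manage lm-sensors service"
```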
2025-02-10 09:10:59.788941 | orchestrator | 2025-02-10 09:10:59.790245 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-02-10 09:10:59.790309 | orchestrator | 2025-02-10 09:10:59.793401 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-02-10 09:10:59.795914 | orchestrator | Monday 10 February 2025 09:10:59 +0000 (0:00:00.225) 0:00:00.225 ******* 2025-02-10 09:10:59.950089 | orchestrator | ok: [testbed-manager] 2025-02-10 09:11:00.028327 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:11:00.113131 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:11:00.187903 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:11:00.363950 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:11:00.510871 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:11:00.511072 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:11:00.511097 | orchestrator | 2025-02-10 09:11:00.511597 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-02-10 09:11:00.511625 | orchestrator | Monday 10 February 2025 09:11:00 +0000 (0:00:00.720) 0:00:00.946 ******* 2025-02-10 09:11:01.721741 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:11:01.721999 | orchestrator | 2025-02-10 09:11:01.722739 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-02-10 09:11:01.729049 | orchestrator | Monday 10 February 2025 09:11:01 +0000 (0:00:01.210) 0:00:02.156 ******* 2025-02-10 09:11:03.904573 | orchestrator | ok: [testbed-manager] 2025-02-10 09:11:03.905027 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:11:03.907196 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:11:03.907242 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:11:03.908053 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:11:03.908776 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:11:03.910899 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:11:03.911544 | orchestrator | 2025-02-10 09:11:03.912449 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-02-10 09:11:03.913153 | orchestrator | Monday 10 February 2025 09:11:03 +0000 (0:00:02.185) 0:00:04.341 ******* 2025-02-10 09:11:04.530469 | orchestrator | changed: [testbed-manager] 2025-02-10 09:11:04.624066 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:11:05.066302 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:11:05.067407 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:11:05.067430 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:11:05.067676 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:11:05.071449 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:11:05.072160 | orchestrator | 2025-02-10 09:11:05.072614 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-02-10 09:11:05.073426 | orchestrator | Monday 10 February 2025 09:11:05 +0000 (0:00:01.161) 0:00:05.503 ******* 2025-02-10 09:11:06.401861 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:11:06.402691 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:11:06.405228 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:11:06.405457 | orchestrator | ok: [testbed-node-3] 2025-02-10 
09:11:06.406238 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:11:06.407310 | orchestrator | ok: [testbed-manager] 2025-02-10 09:11:06.408146 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:11:06.409122 | orchestrator | 2025-02-10 09:11:06.409657 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-02-10 09:11:06.410394 | orchestrator | Monday 10 February 2025 09:11:06 +0000 (0:00:01.331) 0:00:06.835 ******* 2025-02-10 09:11:06.666814 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:11:06.751705 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:11:06.830984 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:11:06.913105 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:11:07.050623 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:11:07.053106 | orchestrator | changed: [testbed-manager] 2025-02-10 09:11:20.431538 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:11:20.431735 | orchestrator | 2025-02-10 09:11:20.431752 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-02-10 09:11:20.431763 | orchestrator | Monday 10 February 2025 09:11:07 +0000 (0:00:00.651) 0:00:07.487 ******* 2025-02-10 09:11:20.431787 | orchestrator | changed: [testbed-manager] 2025-02-10 09:11:20.431838 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:11:20.431849 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:11:20.431858 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:11:20.431869 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:11:20.432751 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:11:20.433323 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:11:20.434207 | orchestrator | 2025-02-10 09:11:20.434682 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-02-10 09:11:20.435105 | orchestrator | Monday 10 February 2025 09:11:20 +0000 (0:00:13.371) 0:00:20.858 ******* 2025-02-10 09:11:21.746557 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:11:21.746882 | orchestrator | 2025-02-10 09:11:21.746945 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-02-10 09:11:23.702664 | orchestrator | Monday 10 February 2025 09:11:21 +0000 (0:00:01.314) 0:00:22.172 ******* 2025-02-10 09:11:23.702864 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:11:23.702954 | orchestrator | changed: [testbed-manager] 2025-02-10 09:11:23.704141 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:11:23.705946 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:11:23.707284 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:11:23.707318 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:11:23.708492 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:11:23.708524 | orchestrator | 2025-02-10 09:11:23.708712 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:11:23.709007 | orchestrator | 2025-02-10 09:11:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:11:23.710250 | orchestrator | 2025-02-10 09:11:23 | INFO  | Please wait and do not abort execution. 
2025-02-10 09:11:23.710283 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:11:23.711674 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:11:23.712656 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:11:23.713674 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:11:23.714834 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:11:23.717580 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:11:23.718725 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:11:23.719478 | orchestrator | 2025-02-10 09:11:23.720735 | orchestrator | 2025-02-10 09:11:23.721789 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:11:23.722940 | orchestrator | Monday 10 February 2025 09:11:23 +0000 (0:00:01.967) 0:00:24.139 ******* 2025-02-10 09:11:23.724632 | orchestrator | =============================================================================== 2025-02-10 09:11:23.725258 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.37s 2025-02-10 09:11:23.726790 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.19s 2025-02-10 09:11:23.727198 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.97s 2025-02-10 09:11:23.728394 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.33s 2025-02-10 09:11:23.729160 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.31s 2025-02-10 09:11:23.730411 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.21s 2025-02-10 09:11:23.731128 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.16s 2025-02-10 09:11:23.731751 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.72s 2025-02-10 09:11:23.732649 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.65s 2025-02-10 09:11:24.330307 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-02-10 09:11:27.759982 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-02-10 09:11:27.805105 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-02-10 09:11:27.805269 | orchestrator | + local max_attempts=60 2025-02-10 09:11:27.805304 | orchestrator | + local name=ceph-ansible 2025-02-10 09:11:27.805329 | orchestrator | + local attempt_num=1 2025-02-10 09:11:27.805397 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-02-10 09:11:27.805445 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-10 09:11:27.805576 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-02-10 09:11:27.805607 | orchestrator | + local max_attempts=60 2025-02-10 09:11:27.805632 | orchestrator | + local name=kolla-ansible 2025-02-10 09:11:27.805657 | orchestrator | + local attempt_num=1 2025-02-10 09:11:27.805733 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' kolla-ansible 2025-02-10 09:11:27.841724 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-10 09:11:27.841895 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-02-10 09:11:27.841918 | orchestrator | + local max_attempts=60 2025-02-10 09:11:27.841934 | orchestrator | + local name=osism-ansible 2025-02-10 09:11:27.841949 | orchestrator | + local attempt_num=1 2025-02-10 09:11:27.841970 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-02-10 09:11:27.871200 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-10 09:11:28.072757 | orchestrator | + [[ true == \t\r\u\e ]] 2025-02-10 09:11:28.072954 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-02-10 09:11:28.073003 | orchestrator | ARA in ceph-ansible already disabled. 2025-02-10 09:11:28.233584 | orchestrator | ARA in kolla-ansible already disabled. 2025-02-10 09:11:28.415619 | orchestrator | ARA in osism-ansible already disabled. 2025-02-10 09:11:28.591415 | orchestrator | ARA in osism-kubernetes already disabled. 2025-02-10 09:11:28.591912 | orchestrator | + osism apply gather-facts 2025-02-10 09:11:30.146510 | orchestrator | 2025-02-10 09:11:30 | INFO  | Task 7de3146f-2f07-49b5-af9e-d5ad266aad26 (gather-facts) was prepared for execution. 2025-02-10 09:11:33.582697 | orchestrator | 2025-02-10 09:11:30 | INFO  | It takes a moment until task 7de3146f-2f07-49b5-af9e-d5ad266aad26 (gather-facts) has been started and output is visible here. 2025-02-10 09:11:33.582887 | orchestrator | 2025-02-10 09:11:33.585299 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-10 09:11:33.585422 | orchestrator | 2025-02-10 09:11:33.587300 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-10 09:11:33.588159 | orchestrator | Monday 10 February 2025 09:11:33 +0000 (0:00:00.192) 0:00:00.192 ******* 2025-02-10 09:11:38.958386 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:11:38.958609 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:11:38.961835 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:11:38.962555 | orchestrator | ok: [testbed-manager] 2025-02-10 09:11:38.962577 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:11:38.962589 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:11:38.963655 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:11:38.964164 | orchestrator | 2025-02-10 09:11:38.965450 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-02-10 09:11:38.965903 | orchestrator | 2025-02-10 09:11:38.966487 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-02-10 09:11:38.969145 | orchestrator | Monday 10 February 2025 09:11:38 +0000 (0:00:05.380) 0:00:05.572 ******* 2025-02-10 09:11:39.131365 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:11:39.231067 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:11:39.310778 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:11:39.391877 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:11:39.478426 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:11:39.526859 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:11:39.527729 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:11:39.528665 | orchestrator | 2025-02-10 09:11:39.531029 | orchestrator | PLAY RECAP 
********************************************************************* 2025-02-10 09:11:39.531159 | orchestrator | 2025-02-10 09:11:39 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:11:39.531432 | orchestrator | 2025-02-10 09:11:39 | INFO  | Please wait and do not abort execution. 2025-02-10 09:11:39.531467 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:11:39.532391 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:11:39.533170 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:11:39.534214 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:11:39.534827 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:11:39.535976 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:11:39.536834 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:11:39.537393 | orchestrator | 2025-02-10 09:11:39.538000 | orchestrator | 2025-02-10 09:11:39.538713 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:11:39.539335 | orchestrator | Monday 10 February 2025 09:11:39 +0000 (0:00:00.570) 0:00:06.143 ******* 2025-02-10 09:11:39.539910 | orchestrator | =============================================================================== 2025-02-10 09:11:39.540527 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.38s 2025-02-10 09:11:39.541122 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s 2025-02-10 09:11:40.180884 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-02-10 09:11:40.198887 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-02-10 09:11:40.220186 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-02-10 09:11:40.238932 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-02-10 09:11:40.255957 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-02-10 09:11:40.271022 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-02-10 09:11:40.287077 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-02-10 09:11:40.305932 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-02-10 09:11:40.320270 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-02-10 09:11:40.340061 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-02-10 09:11:40.360497 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-02-10 09:11:40.380113 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-02-10 09:11:40.400534 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-02-10 09:11:40.419479 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-02-10 09:11:40.437385 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-02-10 09:11:40.455904 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-02-10 09:11:40.473619 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-02-10 09:11:40.497186 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-02-10 09:11:40.518484 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-02-10 09:11:40.533928 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-02-10 09:11:40.552086 | orchestrator | + [[ false == \t\r\u\e ]] 2025-02-10 09:11:40.711452 | orchestrator | changed 2025-02-10 09:11:40.768740 | 2025-02-10 09:11:40.768881 | TASK [Deploy services] 2025-02-10 09:11:40.876144 | orchestrator | skipping: Conditional result was False 2025-02-10 09:11:40.893702 | 2025-02-10 09:11:40.893835 | TASK [Deploy in a nutshell] 2025-02-10 09:11:41.666086 | orchestrator | 2025-02-10 09:11:41.667040 | orchestrator | # PULL IMAGES 2025-02-10 09:11:41.667084 | orchestrator | 2025-02-10 09:11:41.667103 | orchestrator | + set -e 2025-02-10 09:11:41.667152 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-10 09:11:41.667177 | orchestrator | ++ export INTERACTIVE=false 2025-02-10 09:11:41.667196 | orchestrator | ++ INTERACTIVE=false 2025-02-10 09:11:41.667226 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-10 09:11:41.667252 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-10 09:11:41.667268 | orchestrator | + source /opt/manager-vars.sh 2025-02-10 09:11:41.667281 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-10 09:11:41.667296 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-10 09:11:41.667309 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-10 09:11:41.667323 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-10 09:11:41.667369 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-10 09:11:41.667384 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-10 09:11:41.667398 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-10 09:11:41.667413 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-10 09:11:41.667427 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-10 09:11:41.667441 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-10 09:11:41.667455 | orchestrator | ++ export ARA=false 2025-02-10 09:11:41.667469 | orchestrator | ++ ARA=false 2025-02-10 09:11:41.667483 | orchestrator | ++ export TEMPEST=false 2025-02-10 09:11:41.667497 | orchestrator | ++ TEMPEST=false 2025-02-10 09:11:41.667512 | orchestrator | ++ export IS_ZUUL=true 2025-02-10 09:11:41.667526 | 
orchestrator | ++ IS_ZUUL=true 2025-02-10 09:11:41.667540 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.97 2025-02-10 09:11:41.667555 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.97 2025-02-10 09:11:41.667569 | orchestrator | ++ export EXTERNAL_API=false 2025-02-10 09:11:41.667583 | orchestrator | ++ EXTERNAL_API=false 2025-02-10 09:11:41.667597 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-10 09:11:41.667611 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-10 09:11:41.667632 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-10 09:11:41.667647 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-10 09:11:41.667661 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-02-10 09:11:41.667675 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-10 09:11:41.667689 | orchestrator | + echo 2025-02-10 09:11:41.667703 | orchestrator | + echo '# PULL IMAGES' 2025-02-10 09:11:41.667717 | orchestrator | + echo 2025-02-10 09:11:41.667742 | orchestrator | ++ semver latest 7.0.0 2025-02-10 09:11:41.736863 | orchestrator | + [[ -1 -ge 0 ]] 2025-02-10 09:11:43.189580 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-10 09:11:43.189724 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-02-10 09:11:43.189782 | orchestrator | 2025-02-10 09:11:43 | INFO  | Trying to run play pull-images in environment custom 2025-02-10 09:11:43.238914 | orchestrator | 2025-02-10 09:11:43 | INFO  | Task 6719dacf-3a4c-42e2-b48e-02f0d28cba6f (pull-images) was prepared for execution. 2025-02-10 09:11:46.581683 | orchestrator | 2025-02-10 09:11:43 | INFO  | It takes a moment until task 6719dacf-3a4c-42e2-b48e-02f0d28cba6f (pull-images) has been started and output is visible here. 2025-02-10 09:11:46.581863 | orchestrator | 2025-02-10 09:11:46.589590 | orchestrator | PLAY [Pull images] ************************************************************* 2025-02-10 09:12:27.758676 | orchestrator | 2025-02-10 09:12:27.758866 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-02-10 09:12:27.758890 | orchestrator | Monday 10 February 2025 09:11:46 +0000 (0:00:00.171) 0:00:00.171 ******* 2025-02-10 09:12:27.758928 | orchestrator | changed: [testbed-manager] 2025-02-10 09:13:19.551919 | orchestrator | 2025-02-10 09:13:19.552098 | orchestrator | TASK [Pull other images] ******************************************************* 2025-02-10 09:13:19.552120 | orchestrator | Monday 10 February 2025 09:12:27 +0000 (0:00:41.186) 0:00:41.358 ******* 2025-02-10 09:13:19.552154 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-02-10 09:13:19.552325 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-02-10 09:13:19.552496 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-02-10 09:13:19.552549 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-02-10 09:13:19.553155 | orchestrator | changed: [testbed-manager] => (item=common) 2025-02-10 09:13:19.553767 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-02-10 09:13:19.556796 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-02-10 09:13:19.557047 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-02-10 09:13:19.557664 | orchestrator | changed: [testbed-manager] => (item=heat) 2025-02-10 09:13:19.558458 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-02-10 09:13:19.559327 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-02-10 09:13:19.560393 | orchestrator 
| changed: [testbed-manager] => (item=loadbalancer) 2025-02-10 09:13:19.561664 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-02-10 09:13:19.562009 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-02-10 09:13:19.562795 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-02-10 09:13:19.564210 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-02-10 09:13:19.568305 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-02-10 09:13:19.568962 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-02-10 09:13:19.569518 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-02-10 09:13:19.569785 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-02-10 09:13:19.570284 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-02-10 09:13:19.570784 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-02-10 09:13:19.571272 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-02-10 09:13:19.571936 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-02-10 09:13:19.575216 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-02-10 09:13:19.576282 | orchestrator | 2025-02-10 09:13:19.576314 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:13:19.576332 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:13:19.576372 | orchestrator | 2025-02-10 09:13:19.576386 | orchestrator | 2025-02-10 09:13:19.576401 | orchestrator | 2025-02-10 09:13:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:13:19.576417 | orchestrator | 2025-02-10 09:13:19 | INFO  | Please wait and do not abort execution. 2025-02-10 09:13:19.576438 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:13:19.576781 | orchestrator | Monday 10 February 2025 09:13:19 +0000 (0:00:51.792) 0:01:33.150 ******* 2025-02-10 09:13:19.577761 | orchestrator | =============================================================================== 2025-02-10 09:13:19.578212 | orchestrator | Pull other images ------------------------------------------------------ 51.79s 2025-02-10 09:13:19.578744 | orchestrator | Pull keystone image ---------------------------------------------------- 41.19s 2025-02-10 09:13:21.639649 | orchestrator | 2025-02-10 09:13:21 | INFO  | Trying to run play wipe-partitions in environment custom 2025-02-10 09:13:21.690115 | orchestrator | 2025-02-10 09:13:21 | INFO  | Task db004748-a339-492d-a4ed-5c7c554d90c7 (wipe-partitions) was prepared for execution. 2025-02-10 09:13:25.059227 | orchestrator | 2025-02-10 09:13:21 | INFO  | It takes a moment until task db004748-a339-492d-a4ed-5c7c554d90c7 (wipe-partitions) has been started and output is visible here. 
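The wipe-partitions play below prepares the Ceph data disks on testbed-node-3/4/5: it checks device availability, removes any old signatures, zeroes the first 32 MiB, and re-triggers udev. A shell equivalent of those steps (device names taken from the log; not the actual playbook):

```bash
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    sudo wipefs --all "$dev"                                    # "Wipe partitions with wipefs"
    sudo dd if=/dev/zero of="$dev" bs=1M count=32 oflag=direct  # "Overwrite first 32M with zeros"
done
sudo udevadm control --reload-rules                             # "Reload udev rules"
sudo udevadm trigger                                            # "Request device events from the kernel"
```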
2025-02-10 09:13:25.059456 | orchestrator | 2025-02-10 09:13:25.061194 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-02-10 09:13:25.062078 | orchestrator | 2025-02-10 09:13:25.062466 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-02-10 09:13:25.063100 | orchestrator | Monday 10 February 2025 09:13:25 +0000 (0:00:00.130) 0:00:00.130 ******* 2025-02-10 09:13:25.628080 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:13:25.629090 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:13:25.629387 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:13:25.631138 | orchestrator | 2025-02-10 09:13:25.632677 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-02-10 09:13:25.633253 | orchestrator | Monday 10 February 2025 09:13:25 +0000 (0:00:00.569) 0:00:00.699 ******* 2025-02-10 09:13:25.799021 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:13:25.899861 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:13:25.902302 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:13:25.902489 | orchestrator | 2025-02-10 09:13:25.902584 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-02-10 09:13:25.903154 | orchestrator | Monday 10 February 2025 09:13:25 +0000 (0:00:00.274) 0:00:00.973 ******* 2025-02-10 09:13:26.630426 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:13:26.630871 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:13:26.631506 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:13:26.635428 | orchestrator | 2025-02-10 09:13:26.801394 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-02-10 09:13:26.801533 | orchestrator | Monday 10 February 2025 09:13:26 +0000 (0:00:00.731) 0:00:01.705 ******* 2025-02-10 09:13:26.801571 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:13:26.901089 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:13:26.906295 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:13:26.906800 | orchestrator | 2025-02-10 09:13:26.907774 | orchestrator | TASK [Check device availability] *********************************************** 2025-02-10 09:13:26.908222 | orchestrator | Monday 10 February 2025 09:13:26 +0000 (0:00:00.270) 0:00:01.975 ******* 2025-02-10 09:13:28.195538 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-02-10 09:13:28.197857 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-02-10 09:13:28.198308 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-02-10 09:13:28.198367 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-02-10 09:13:28.198841 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-02-10 09:13:28.200198 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-02-10 09:13:28.200419 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-02-10 09:13:28.202103 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-02-10 09:13:28.203181 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-02-10 09:13:28.203211 | orchestrator | 2025-02-10 09:13:28.203951 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-02-10 09:13:28.203983 | orchestrator | Monday 10 February 2025 09:13:28 +0000 (0:00:01.294) 0:00:03.270 ******* 2025-02-10 09:13:29.672957 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-02-10 09:13:29.677516 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-02-10 09:13:29.677632 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-02-10 09:13:29.677662 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-02-10 09:13:29.677710 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-02-10 09:13:29.677808 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-02-10 09:13:29.677834 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-02-10 09:13:29.677859 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-02-10 09:13:29.677882 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-02-10 09:13:29.677960 | orchestrator | 2025-02-10 09:13:29.678122 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-02-10 09:13:29.678852 | orchestrator | Monday 10 February 2025 09:13:29 +0000 (0:00:01.475) 0:00:04.745 ******* 2025-02-10 09:13:32.019180 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-02-10 09:13:32.019612 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-02-10 09:13:32.019654 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-02-10 09:13:32.019699 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-02-10 09:13:32.021166 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-02-10 09:13:32.026393 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-02-10 09:13:32.026570 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-02-10 09:13:32.026590 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-02-10 09:13:32.026602 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-02-10 09:13:32.026644 | orchestrator | 2025-02-10 09:13:32.026942 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-02-10 09:13:32.027135 | orchestrator | Monday 10 February 2025 09:13:32 +0000 (0:00:02.348) 0:00:07.094 ******* 2025-02-10 09:13:32.664954 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:13:32.666111 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:13:32.666150 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:13:32.666198 | orchestrator | 2025-02-10 09:13:32.666613 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-02-10 09:13:32.667209 | orchestrator | Monday 10 February 2025 09:13:32 +0000 (0:00:00.641) 0:00:07.735 ******* 2025-02-10 09:13:33.295895 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:13:33.296495 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:13:33.296540 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:13:33.296853 | orchestrator | 2025-02-10 09:13:33.297591 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:13:33.297812 | orchestrator | 2025-02-10 09:13:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:13:33.297928 | orchestrator | 2025-02-10 09:13:33 | INFO  | Please wait and do not abort execution. 
2025-02-10 09:13:33.298736 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:13:33.300090 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:13:33.300577 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:13:33.301039 | orchestrator | 2025-02-10 09:13:33.301757 | orchestrator | 2025-02-10 09:13:33.302092 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:13:33.302902 | orchestrator | Monday 10 February 2025 09:13:33 +0000 (0:00:00.633) 0:00:08.369 ******* 2025-02-10 09:13:33.303470 | orchestrator | =============================================================================== 2025-02-10 09:13:33.303747 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.35s 2025-02-10 09:13:33.304441 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.48s 2025-02-10 09:13:33.305018 | orchestrator | Check device availability ----------------------------------------------- 1.29s 2025-02-10 09:13:33.305547 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.73s 2025-02-10 09:13:33.306285 | orchestrator | Reload udev rules ------------------------------------------------------- 0.64s 2025-02-10 09:13:33.306754 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s 2025-02-10 09:13:33.307417 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.57s 2025-02-10 09:13:33.308059 | orchestrator | Remove all rook related logical devices --------------------------------- 0.27s 2025-02-10 09:13:33.308315 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s 2025-02-10 09:13:35.384201 | orchestrator | 2025-02-10 09:13:35 | INFO  | Task 89f69f9c-cf53-4772-9b84-e49b850e6b32 (facts) was prepared for execution. 2025-02-10 09:13:38.814597 | orchestrator | 2025-02-10 09:13:35 | INFO  | It takes a moment until task 89f69f9c-cf53-4772-9b84-e49b850e6b32 (facts) has been started and output is visible here. 
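The facts play that follows prepares Ansible local facts before the Ceph LVM configuration. The idea: files dropped into a facts.d directory are exposed as `ansible_local` values on the next fact-gathering pass. A small illustration, assuming the default /etc/ansible/facts.d location (the role's actual path and fact files may differ):

```bash
sudo mkdir -p /etc/ansible/facts.d                         # "Create custom facts directory"
cat <<'EOF' | sudo tee /etc/ansible/facts.d/testbed.fact   # hypothetical static fact file
{"deployment": "testbed-in-a-nutshell"}
EOF
# The value then shows up under ansible_local.testbed on the next setup run:
#   ansible testbed-node-3 -m setup -a 'filter=ansible_local'
```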
2025-02-10 09:13:38.814806 | orchestrator | 2025-02-10 09:13:38.818325 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-02-10 09:13:38.818427 | orchestrator | 2025-02-10 09:13:38.821549 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-02-10 09:13:40.080896 | orchestrator | Monday 10 February 2025 09:13:38 +0000 (0:00:00.212) 0:00:00.212 ******* 2025-02-10 09:13:40.081157 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:13:40.081248 | orchestrator | ok: [testbed-manager] 2025-02-10 09:13:40.082384 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:13:40.085575 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:13:40.086086 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:13:40.086558 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:13:40.090213 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:13:40.092316 | orchestrator | 2025-02-10 09:13:40.092446 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-02-10 09:13:40.092482 | orchestrator | Monday 10 February 2025 09:13:40 +0000 (0:00:01.268) 0:00:01.480 ******* 2025-02-10 09:13:40.255937 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:13:40.337816 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:13:40.424511 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:13:40.506785 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:13:40.582537 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:13:41.388088 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:13:41.389657 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:13:41.389743 | orchestrator | 2025-02-10 09:13:41.389854 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-10 09:13:41.390976 | orchestrator | 2025-02-10 09:13:41.392219 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-10 09:13:41.392545 | orchestrator | Monday 10 February 2025 09:13:41 +0000 (0:00:01.311) 0:00:02.792 ******* 2025-02-10 09:13:47.642877 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:13:47.643116 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:13:47.643138 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:13:47.643147 | orchestrator | ok: [testbed-manager] 2025-02-10 09:13:47.643154 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:13:47.643161 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:13:47.643173 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:13:47.643431 | orchestrator | 2025-02-10 09:13:47.644321 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-02-10 09:13:47.644582 | orchestrator | 2025-02-10 09:13:47.644883 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-02-10 09:13:47.647229 | orchestrator | Monday 10 February 2025 09:13:47 +0000 (0:00:06.249) 0:00:09.042 ******* 2025-02-10 09:13:47.882951 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:13:48.000667 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:13:48.125733 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:13:48.215236 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:13:48.301474 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:13:48.355415 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:13:48.356635 | orchestrator | skipping: 
[testbed-node-5] 2025-02-10 09:13:48.356715 | orchestrator | 2025-02-10 09:13:48.356777 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:13:48.357200 | orchestrator | 2025-02-10 09:13:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:13:48.360591 | orchestrator | 2025-02-10 09:13:48 | INFO  | Please wait and do not abort execution. 2025-02-10 09:13:48.360624 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:13:48.360717 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:13:48.360917 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:13:48.361266 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:13:48.361556 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:13:48.361899 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:13:48.362156 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:13:48.362473 | orchestrator | 2025-02-10 09:13:48.362672 | orchestrator | 2025-02-10 09:13:48.363020 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:13:48.363246 | orchestrator | Monday 10 February 2025 09:13:48 +0000 (0:00:00.717) 0:00:09.760 ******* 2025-02-10 09:13:48.363512 | orchestrator | =============================================================================== 2025-02-10 09:13:48.363711 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.25s 2025-02-10 09:13:48.364001 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.31s 2025-02-10 09:13:48.364287 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.27s 2025-02-10 09:13:48.364453 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.72s 2025-02-10 09:13:50.792482 | orchestrator | 2025-02-10 09:13:50 | INFO  | Task 993c8395-d46d-4f6f-b168-ad9c27c96616 (ceph-configure-lvm-volumes) was prepared for execution. 2025-02-10 09:13:55.427793 | orchestrator | 2025-02-10 09:13:50 | INFO  | It takes a moment until task 993c8395-d46d-4f6f-b168-ad9c27c96616 (ceph-configure-lvm-volumes) has been started and output is visible here. 
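The ceph-configure-lvm-volumes play below builds its device list from persistent /dev/disk/by-id links (the scsi-0QEMU_QEMU_HARDDISK_<uuid> names in this testbed) rather than from /dev/sdX, so the generated LVM/OSD layout survives device renumbering across reboots. A quick way to inspect that mapping on a node (illustrative, not part of the play):

```bash
# List persistent by-id links and the kernel device each currently points to.
for link in /dev/disk/by-id/scsi-*; do
    printf '%s -> %s\n' "$link" "$(readlink -f "$link")"
done
```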
2025-02-10 09:13:55.427967 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-10 09:13:56.130330 | orchestrator | 2025-02-10 09:13:56.130605 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-02-10 09:13:56.131666 | orchestrator | 2025-02-10 09:13:56.132022 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-10 09:13:56.132569 | orchestrator | Monday 10 February 2025 09:13:56 +0000 (0:00:00.562) 0:00:00.562 ******* 2025-02-10 09:13:56.437003 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:13:56.440232 | orchestrator | 2025-02-10 09:13:56.672929 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-10 09:13:56.673096 | orchestrator | Monday 10 February 2025 09:13:56 +0000 (0:00:00.309) 0:00:00.872 ******* 2025-02-10 09:13:56.673132 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:13:56.673381 | orchestrator | 2025-02-10 09:13:56.673413 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:13:56.673969 | orchestrator | Monday 10 February 2025 09:13:56 +0000 (0:00:00.237) 0:00:01.109 ******* 2025-02-10 09:13:57.235633 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-02-10 09:13:57.235815 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-02-10 09:13:57.236107 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-02-10 09:13:57.236559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-02-10 09:13:57.236978 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-02-10 09:13:57.239166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-02-10 09:13:57.239731 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-02-10 09:13:57.239846 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-02-10 09:13:57.240234 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-02-10 09:13:57.240331 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-02-10 09:13:57.241210 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-02-10 09:13:57.241641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-02-10 09:13:57.241673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-02-10 09:13:57.241863 | orchestrator | 2025-02-10 09:13:57.242147 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:13:57.246886 | orchestrator | Monday 10 February 2025 09:13:57 +0000 (0:00:00.560) 0:00:01.669 ******* 2025-02-10 09:13:57.424944 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:13:57.426080 | orchestrator | 2025-02-10 09:13:57.429836 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:13:57.430070 | orchestrator | Monday 10 February 2025 09:13:57 +0000 
(0:00:00.189) 0:00:01.859 ******* 2025-02-10 09:13:57.650096 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:13:57.651546 | orchestrator | 2025-02-10 09:13:57.654596 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:13:57.654745 | orchestrator | Monday 10 February 2025 09:13:57 +0000 (0:00:00.223) 0:00:02.083 ******* 2025-02-10 09:13:57.825648 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:13:57.826304 | orchestrator | 2025-02-10 09:13:57.828590 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:13:57.829772 | orchestrator | Monday 10 February 2025 09:13:57 +0000 (0:00:00.178) 0:00:02.261 ******* 2025-02-10 09:13:58.036038 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:13:58.036557 | orchestrator | 2025-02-10 09:13:58.036591 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:13:58.036913 | orchestrator | Monday 10 February 2025 09:13:58 +0000 (0:00:00.204) 0:00:02.466 ******* 2025-02-10 09:13:58.303799 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:13:58.306000 | orchestrator | 2025-02-10 09:13:58.308022 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:13:58.308091 | orchestrator | Monday 10 February 2025 09:13:58 +0000 (0:00:00.267) 0:00:02.733 ******* 2025-02-10 09:13:58.559935 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:13:58.560159 | orchestrator | 2025-02-10 09:13:58.560551 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:13:58.561233 | orchestrator | Monday 10 February 2025 09:13:58 +0000 (0:00:00.260) 0:00:02.994 ******* 2025-02-10 09:13:58.790417 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:13:58.791740 | orchestrator | 2025-02-10 09:13:58.792598 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:13:58.793489 | orchestrator | Monday 10 February 2025 09:13:58 +0000 (0:00:00.230) 0:00:03.224 ******* 2025-02-10 09:13:59.182736 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:13:59.182950 | orchestrator | 2025-02-10 09:13:59.183476 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:13:59.184620 | orchestrator | Monday 10 February 2025 09:13:59 +0000 (0:00:00.389) 0:00:03.614 ******* 2025-02-10 09:14:00.168884 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84) 2025-02-10 09:14:00.169564 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84) 2025-02-10 09:14:00.170144 | orchestrator | 2025-02-10 09:14:00.170900 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:00.171647 | orchestrator | Monday 10 February 2025 09:14:00 +0000 (0:00:00.986) 0:00:04.600 ******* 2025-02-10 09:14:00.741396 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2f4b37ab-ea48-4e89-a573-74f28832e598) 2025-02-10 09:14:00.741726 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2f4b37ab-ea48-4e89-a573-74f28832e598) 2025-02-10 09:14:00.742011 | orchestrator | 2025-02-10 09:14:00.742128 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 
09:14:00.744739 | orchestrator | Monday 10 February 2025 09:14:00 +0000 (0:00:00.575) 0:00:05.176 ******* 2025-02-10 09:14:01.315916 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5f0d01b9-0e02-4dee-9565-cff6803c305a) 2025-02-10 09:14:01.319081 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5f0d01b9-0e02-4dee-9565-cff6803c305a) 2025-02-10 09:14:01.319655 | orchestrator | 2025-02-10 09:14:01.320480 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:01.321892 | orchestrator | Monday 10 February 2025 09:14:01 +0000 (0:00:00.571) 0:00:05.748 ******* 2025-02-10 09:14:01.902860 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b9150377-bf23-4053-9d8b-4b6b16705e51) 2025-02-10 09:14:01.904567 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b9150377-bf23-4053-9d8b-4b6b16705e51) 2025-02-10 09:14:01.906614 | orchestrator | 2025-02-10 09:14:01.907263 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:01.908843 | orchestrator | Monday 10 February 2025 09:14:01 +0000 (0:00:00.587) 0:00:06.335 ******* 2025-02-10 09:14:02.327001 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-10 09:14:02.328643 | orchestrator | 2025-02-10 09:14:02.328750 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:02.329520 | orchestrator | Monday 10 February 2025 09:14:02 +0000 (0:00:00.424) 0:00:06.760 ******* 2025-02-10 09:14:02.842917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-02-10 09:14:02.843996 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-02-10 09:14:02.844057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-02-10 09:14:02.844096 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-02-10 09:14:02.845217 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-02-10 09:14:02.846215 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-02-10 09:14:02.849290 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-02-10 09:14:02.852124 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-02-10 09:14:02.852180 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-02-10 09:14:02.852195 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-02-10 09:14:02.852224 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-02-10 09:14:02.852719 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-02-10 09:14:02.855124 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-02-10 09:14:02.855262 | orchestrator | 2025-02-10 09:14:02.855751 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:02.856132 | orchestrator | Monday 10 February 2025 09:14:02 
+0000 (0:00:00.515) 0:00:07.275 ******* 2025-02-10 09:14:03.040084 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:03.040676 | orchestrator | 2025-02-10 09:14:03.041165 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:03.041857 | orchestrator | Monday 10 February 2025 09:14:03 +0000 (0:00:00.199) 0:00:07.475 ******* 2025-02-10 09:14:03.259932 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:03.260620 | orchestrator | 2025-02-10 09:14:03.261095 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:03.261808 | orchestrator | Monday 10 February 2025 09:14:03 +0000 (0:00:00.218) 0:00:07.693 ******* 2025-02-10 09:14:03.470319 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:03.470633 | orchestrator | 2025-02-10 09:14:03.711331 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:03.711559 | orchestrator | Monday 10 February 2025 09:14:03 +0000 (0:00:00.211) 0:00:07.905 ******* 2025-02-10 09:14:03.711614 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:03.712784 | orchestrator | 2025-02-10 09:14:03.712873 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:03.712897 | orchestrator | Monday 10 February 2025 09:14:03 +0000 (0:00:00.238) 0:00:08.144 ******* 2025-02-10 09:14:04.301949 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:04.302531 | orchestrator | 2025-02-10 09:14:04.302911 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:04.303395 | orchestrator | Monday 10 February 2025 09:14:04 +0000 (0:00:00.592) 0:00:08.736 ******* 2025-02-10 09:14:04.542967 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:04.544711 | orchestrator | 2025-02-10 09:14:04.544804 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:04.549636 | orchestrator | Monday 10 February 2025 09:14:04 +0000 (0:00:00.241) 0:00:08.978 ******* 2025-02-10 09:14:04.792602 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:04.795255 | orchestrator | 2025-02-10 09:14:04.799640 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:04.800061 | orchestrator | Monday 10 February 2025 09:14:04 +0000 (0:00:00.250) 0:00:09.228 ******* 2025-02-10 09:14:05.077507 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:05.078640 | orchestrator | 2025-02-10 09:14:05.079712 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:05.080177 | orchestrator | Monday 10 February 2025 09:14:05 +0000 (0:00:00.284) 0:00:09.513 ******* 2025-02-10 09:14:05.818990 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-02-10 09:14:05.819797 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-02-10 09:14:05.819846 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-02-10 09:14:05.819922 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-02-10 09:14:05.820418 | orchestrator | 2025-02-10 09:14:05.820965 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:05.821460 | orchestrator | Monday 10 February 2025 09:14:05 +0000 (0:00:00.735) 0:00:10.248 ******* 2025-02-10 09:14:06.049871 | orchestrator | 
skipping: [testbed-node-3] 2025-02-10 09:14:06.050766 | orchestrator | 2025-02-10 09:14:06.052552 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:06.053145 | orchestrator | Monday 10 February 2025 09:14:06 +0000 (0:00:00.235) 0:00:10.484 ******* 2025-02-10 09:14:06.351252 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:06.352740 | orchestrator | 2025-02-10 09:14:06.352824 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:06.353750 | orchestrator | Monday 10 February 2025 09:14:06 +0000 (0:00:00.300) 0:00:10.784 ******* 2025-02-10 09:14:06.662269 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:06.664807 | orchestrator | 2025-02-10 09:14:06.664963 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:06.667588 | orchestrator | Monday 10 February 2025 09:14:06 +0000 (0:00:00.310) 0:00:11.094 ******* 2025-02-10 09:14:06.907752 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:06.908015 | orchestrator | 2025-02-10 09:14:06.908046 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-02-10 09:14:06.908625 | orchestrator | Monday 10 February 2025 09:14:06 +0000 (0:00:00.243) 0:00:11.338 ******* 2025-02-10 09:14:07.125595 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-02-10 09:14:07.125808 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-02-10 09:14:07.126856 | orchestrator | 2025-02-10 09:14:07.127672 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-02-10 09:14:07.128456 | orchestrator | Monday 10 February 2025 09:14:07 +0000 (0:00:00.222) 0:00:11.561 ******* 2025-02-10 09:14:07.523180 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:07.524433 | orchestrator | 2025-02-10 09:14:07.524682 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-02-10 09:14:07.525120 | orchestrator | Monday 10 February 2025 09:14:07 +0000 (0:00:00.391) 0:00:11.952 ******* 2025-02-10 09:14:07.684484 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:07.684798 | orchestrator | 2025-02-10 09:14:07.687593 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-02-10 09:14:07.687648 | orchestrator | Monday 10 February 2025 09:14:07 +0000 (0:00:00.164) 0:00:12.117 ******* 2025-02-10 09:14:07.858314 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:07.860670 | orchestrator | 2025-02-10 09:14:07.861037 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-02-10 09:14:07.861703 | orchestrator | Monday 10 February 2025 09:14:07 +0000 (0:00:00.171) 0:00:12.288 ******* 2025-02-10 09:14:08.012636 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:14:08.013570 | orchestrator | 2025-02-10 09:14:08.013590 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-02-10 09:14:08.013675 | orchestrator | Monday 10 February 2025 09:14:08 +0000 (0:00:00.153) 0:00:12.442 ******* 2025-02-10 09:14:08.224525 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f024456c-4135-5029-bf0e-13fb105dc5b7'}}) 2025-02-10 09:14:08.224957 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 
'value': {'osd_lvm_uuid': 'a3ebd317-95a0-5383-a134-14be01baa44d'}}) 2025-02-10 09:14:08.225279 | orchestrator | 2025-02-10 09:14:08.226429 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-02-10 09:14:08.226787 | orchestrator | Monday 10 February 2025 09:14:08 +0000 (0:00:00.216) 0:00:12.659 ******* 2025-02-10 09:14:08.397227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f024456c-4135-5029-bf0e-13fb105dc5b7'}})  2025-02-10 09:14:08.401545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a3ebd317-95a0-5383-a134-14be01baa44d'}})  2025-02-10 09:14:08.401611 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:08.401653 | orchestrator | 2025-02-10 09:14:08.402320 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-02-10 09:14:08.403682 | orchestrator | Monday 10 February 2025 09:14:08 +0000 (0:00:00.169) 0:00:12.828 ******* 2025-02-10 09:14:08.591639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f024456c-4135-5029-bf0e-13fb105dc5b7'}})  2025-02-10 09:14:08.593443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a3ebd317-95a0-5383-a134-14be01baa44d'}})  2025-02-10 09:14:08.594055 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:08.594810 | orchestrator | 2025-02-10 09:14:08.599514 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-02-10 09:14:08.599814 | orchestrator | Monday 10 February 2025 09:14:08 +0000 (0:00:00.198) 0:00:13.027 ******* 2025-02-10 09:14:08.790114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f024456c-4135-5029-bf0e-13fb105dc5b7'}})  2025-02-10 09:14:08.791483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a3ebd317-95a0-5383-a134-14be01baa44d'}})  2025-02-10 09:14:08.792552 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:08.797067 | orchestrator | 2025-02-10 09:14:08.974538 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-02-10 09:14:08.974655 | orchestrator | Monday 10 February 2025 09:14:08 +0000 (0:00:00.198) 0:00:13.225 ******* 2025-02-10 09:14:08.974678 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:14:08.974742 | orchestrator | 2025-02-10 09:14:08.975408 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-02-10 09:14:08.975963 | orchestrator | Monday 10 February 2025 09:14:08 +0000 (0:00:00.184) 0:00:13.410 ******* 2025-02-10 09:14:09.178587 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:14:09.180525 | orchestrator | 2025-02-10 09:14:09.180777 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-02-10 09:14:09.183362 | orchestrator | Monday 10 February 2025 09:14:09 +0000 (0:00:00.204) 0:00:13.614 ******* 2025-02-10 09:14:09.393822 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:09.394252 | orchestrator | 2025-02-10 09:14:09.396452 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-02-10 09:14:09.576639 | orchestrator | Monday 10 February 2025 09:14:09 +0000 (0:00:00.213) 0:00:13.827 ******* 2025-02-10 09:14:09.576797 | orchestrator | skipping: [testbed-node-3] 2025-02-10 
09:14:09.577631 | orchestrator | 2025-02-10 09:14:09.577751 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-02-10 09:14:09.577946 | orchestrator | Monday 10 February 2025 09:14:09 +0000 (0:00:00.185) 0:00:14.012 ******* 2025-02-10 09:14:09.955816 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:09.956174 | orchestrator | 2025-02-10 09:14:09.956509 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-02-10 09:14:09.956923 | orchestrator | Monday 10 February 2025 09:14:09 +0000 (0:00:00.378) 0:00:14.390 ******* 2025-02-10 09:14:10.145145 | orchestrator | ok: [testbed-node-3] => { 2025-02-10 09:14:10.145411 | orchestrator |  "ceph_osd_devices": { 2025-02-10 09:14:10.146979 | orchestrator |  "sdb": { 2025-02-10 09:14:10.147264 | orchestrator |  "osd_lvm_uuid": "f024456c-4135-5029-bf0e-13fb105dc5b7" 2025-02-10 09:14:10.151682 | orchestrator |  }, 2025-02-10 09:14:10.151928 | orchestrator |  "sdc": { 2025-02-10 09:14:10.151967 | orchestrator |  "osd_lvm_uuid": "a3ebd317-95a0-5383-a134-14be01baa44d" 2025-02-10 09:14:10.152204 | orchestrator |  } 2025-02-10 09:14:10.152543 | orchestrator |  } 2025-02-10 09:14:10.153047 | orchestrator | } 2025-02-10 09:14:10.153439 | orchestrator | 2025-02-10 09:14:10.154362 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-02-10 09:14:10.154512 | orchestrator | Monday 10 February 2025 09:14:10 +0000 (0:00:00.187) 0:00:14.578 ******* 2025-02-10 09:14:10.355729 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:10.356225 | orchestrator | 2025-02-10 09:14:10.356945 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-02-10 09:14:10.356981 | orchestrator | Monday 10 February 2025 09:14:10 +0000 (0:00:00.210) 0:00:14.788 ******* 2025-02-10 09:14:10.514121 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:10.514331 | orchestrator | 2025-02-10 09:14:10.514475 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-02-10 09:14:10.514563 | orchestrator | Monday 10 February 2025 09:14:10 +0000 (0:00:00.159) 0:00:14.948 ******* 2025-02-10 09:14:10.676602 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:14:10.678926 | orchestrator | 2025-02-10 09:14:10.678998 | orchestrator | TASK [Print configuration data] ************************************************ 2025-02-10 09:14:10.968457 | orchestrator | Monday 10 February 2025 09:14:10 +0000 (0:00:00.160) 0:00:15.109 ******* 2025-02-10 09:14:10.969405 | orchestrator | changed: [testbed-node-3] => { 2025-02-10 09:14:10.971608 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-02-10 09:14:10.971661 | orchestrator |  "ceph_osd_devices": { 2025-02-10 09:14:10.972004 | orchestrator |  "sdb": { 2025-02-10 09:14:10.973010 | orchestrator |  "osd_lvm_uuid": "f024456c-4135-5029-bf0e-13fb105dc5b7" 2025-02-10 09:14:10.975386 | orchestrator |  }, 2025-02-10 09:14:10.978429 | orchestrator |  "sdc": { 2025-02-10 09:14:10.978900 | orchestrator |  "osd_lvm_uuid": "a3ebd317-95a0-5383-a134-14be01baa44d" 2025-02-10 09:14:10.980144 | orchestrator |  } 2025-02-10 09:14:10.980454 | orchestrator |  }, 2025-02-10 09:14:10.980925 | orchestrator |  "lvm_volumes": [ 2025-02-10 09:14:10.983795 | orchestrator |  { 2025-02-10 09:14:10.984025 | orchestrator |  "data": "osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7", 2025-02-10 09:14:10.984415 | orchestrator |  
"data_vg": "ceph-f024456c-4135-5029-bf0e-13fb105dc5b7" 2025-02-10 09:14:10.985030 | orchestrator |  }, 2025-02-10 09:14:10.985396 | orchestrator |  { 2025-02-10 09:14:10.986147 | orchestrator |  "data": "osd-block-a3ebd317-95a0-5383-a134-14be01baa44d", 2025-02-10 09:14:10.986477 | orchestrator |  "data_vg": "ceph-a3ebd317-95a0-5383-a134-14be01baa44d" 2025-02-10 09:14:10.986803 | orchestrator |  } 2025-02-10 09:14:10.987135 | orchestrator |  ] 2025-02-10 09:14:10.987575 | orchestrator |  } 2025-02-10 09:14:10.987782 | orchestrator | } 2025-02-10 09:14:10.988713 | orchestrator | 2025-02-10 09:14:10.988925 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-02-10 09:14:10.989170 | orchestrator | Monday 10 February 2025 09:14:10 +0000 (0:00:00.294) 0:00:15.403 ******* 2025-02-10 09:14:13.454639 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:14:13.456527 | orchestrator | 2025-02-10 09:14:13.456935 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-02-10 09:14:13.457239 | orchestrator | 2025-02-10 09:14:13.457498 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-10 09:14:13.463370 | orchestrator | Monday 10 February 2025 09:14:13 +0000 (0:00:02.486) 0:00:17.890 ******* 2025-02-10 09:14:13.756054 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-02-10 09:14:13.757570 | orchestrator | 2025-02-10 09:14:13.757642 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-10 09:14:13.757704 | orchestrator | Monday 10 February 2025 09:14:13 +0000 (0:00:00.298) 0:00:18.189 ******* 2025-02-10 09:14:14.022994 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:14:14.025093 | orchestrator | 2025-02-10 09:14:14.028626 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:14.029104 | orchestrator | Monday 10 February 2025 09:14:14 +0000 (0:00:00.265) 0:00:18.454 ******* 2025-02-10 09:14:14.482587 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-02-10 09:14:14.483703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-02-10 09:14:14.483757 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-02-10 09:14:14.486223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-02-10 09:14:14.486282 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-02-10 09:14:14.488854 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-02-10 09:14:14.488953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-02-10 09:14:14.490957 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-02-10 09:14:14.492177 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-02-10 09:14:14.492243 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-02-10 09:14:14.492268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-02-10 09:14:14.492533 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-02-10 09:14:14.493768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-02-10 09:14:14.494745 | orchestrator | 2025-02-10 09:14:14.497885 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:14.497997 | orchestrator | Monday 10 February 2025 09:14:14 +0000 (0:00:00.462) 0:00:18.917 ******* 2025-02-10 09:14:14.720478 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:14.720704 | orchestrator | 2025-02-10 09:14:14.721058 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:14.721093 | orchestrator | Monday 10 February 2025 09:14:14 +0000 (0:00:00.235) 0:00:19.153 ******* 2025-02-10 09:14:15.064189 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:15.067134 | orchestrator | 2025-02-10 09:14:15.067409 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:15.068689 | orchestrator | Monday 10 February 2025 09:14:15 +0000 (0:00:00.337) 0:00:19.490 ******* 2025-02-10 09:14:15.378785 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:16.112735 | orchestrator | 2025-02-10 09:14:16.113658 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:16.113713 | orchestrator | Monday 10 February 2025 09:14:15 +0000 (0:00:00.322) 0:00:19.812 ******* 2025-02-10 09:14:16.113852 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:16.113958 | orchestrator | 2025-02-10 09:14:16.116883 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:16.117125 | orchestrator | Monday 10 February 2025 09:14:16 +0000 (0:00:00.732) 0:00:20.545 ******* 2025-02-10 09:14:16.333768 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:16.334157 | orchestrator | 2025-02-10 09:14:16.334209 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:16.334331 | orchestrator | Monday 10 February 2025 09:14:16 +0000 (0:00:00.218) 0:00:20.764 ******* 2025-02-10 09:14:16.606664 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:16.606882 | orchestrator | 2025-02-10 09:14:16.606913 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:16.609083 | orchestrator | Monday 10 February 2025 09:14:16 +0000 (0:00:00.274) 0:00:21.038 ******* 2025-02-10 09:14:16.841956 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:16.842297 | orchestrator | 2025-02-10 09:14:17.062090 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:17.062239 | orchestrator | Monday 10 February 2025 09:14:16 +0000 (0:00:00.237) 0:00:21.276 ******* 2025-02-10 09:14:17.062279 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:17.062617 | orchestrator | 2025-02-10 09:14:17.067019 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:17.067796 | orchestrator | Monday 10 February 2025 09:14:17 +0000 (0:00:00.218) 0:00:21.495 ******* 2025-02-10 09:14:17.678991 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899) 2025-02-10 09:14:17.679230 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899) 2025-02-10 09:14:17.680221 | orchestrator | 2025-02-10 09:14:17.682916 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:17.683680 | orchestrator | Monday 10 February 2025 09:14:17 +0000 (0:00:00.619) 0:00:22.114 ******* 2025-02-10 09:14:18.302643 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b66e53a8-0538-4d41-8a28-7ec132d4688f) 2025-02-10 09:14:18.302840 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b66e53a8-0538-4d41-8a28-7ec132d4688f) 2025-02-10 09:14:18.302862 | orchestrator | 2025-02-10 09:14:18.302883 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:18.728958 | orchestrator | Monday 10 February 2025 09:14:18 +0000 (0:00:00.621) 0:00:22.736 ******* 2025-02-10 09:14:18.729136 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a5ae359e-12ae-4197-8eef-3ae34f8c1334) 2025-02-10 09:14:18.729220 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a5ae359e-12ae-4197-8eef-3ae34f8c1334) 2025-02-10 09:14:18.729243 | orchestrator | 2025-02-10 09:14:18.730173 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:18.731123 | orchestrator | Monday 10 February 2025 09:14:18 +0000 (0:00:00.424) 0:00:23.160 ******* 2025-02-10 09:14:19.399525 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2438f8bd-e1da-4f87-b9a4-97b4ac996f9c) 2025-02-10 09:14:19.403580 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2438f8bd-e1da-4f87-b9a4-97b4ac996f9c) 2025-02-10 09:14:19.403665 | orchestrator | 2025-02-10 09:14:19.403729 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:19.404807 | orchestrator | Monday 10 February 2025 09:14:19 +0000 (0:00:00.673) 0:00:23.833 ******* 2025-02-10 09:14:20.253467 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-10 09:14:20.253841 | orchestrator | 2025-02-10 09:14:20.253885 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:20.254884 | orchestrator | Monday 10 February 2025 09:14:20 +0000 (0:00:00.853) 0:00:24.686 ******* 2025-02-10 09:14:20.694520 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-02-10 09:14:20.697700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-02-10 09:14:20.698099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-02-10 09:14:20.698130 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-02-10 09:14:20.698150 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-02-10 09:14:20.699530 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-02-10 09:14:20.700708 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-02-10 09:14:20.702125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-02-10 09:14:20.703075 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-02-10 09:14:20.704318 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-02-10 09:14:20.704374 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-02-10 09:14:20.707971 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-02-10 09:14:20.708727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-02-10 09:14:20.708754 | orchestrator | 2025-02-10 09:14:20.708776 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:20.709486 | orchestrator | Monday 10 February 2025 09:14:20 +0000 (0:00:00.442) 0:00:25.129 ******* 2025-02-10 09:14:20.990914 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:20.991248 | orchestrator | 2025-02-10 09:14:20.991801 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:20.993059 | orchestrator | Monday 10 February 2025 09:14:20 +0000 (0:00:00.296) 0:00:25.426 ******* 2025-02-10 09:14:21.268809 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:21.269741 | orchestrator | 2025-02-10 09:14:21.270078 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:21.270123 | orchestrator | Monday 10 February 2025 09:14:21 +0000 (0:00:00.276) 0:00:25.703 ******* 2025-02-10 09:14:21.486236 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:21.486524 | orchestrator | 2025-02-10 09:14:21.487711 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:21.488650 | orchestrator | Monday 10 February 2025 09:14:21 +0000 (0:00:00.218) 0:00:25.921 ******* 2025-02-10 09:14:21.709240 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:21.709945 | orchestrator | 2025-02-10 09:14:21.710690 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:21.714520 | orchestrator | Monday 10 February 2025 09:14:21 +0000 (0:00:00.222) 0:00:26.144 ******* 2025-02-10 09:14:22.015119 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:22.016625 | orchestrator | 2025-02-10 09:14:22.017734 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:22.017773 | orchestrator | Monday 10 February 2025 09:14:22 +0000 (0:00:00.304) 0:00:26.448 ******* 2025-02-10 09:14:22.299901 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:22.300778 | orchestrator | 2025-02-10 09:14:22.300828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:22.302704 | orchestrator | Monday 10 February 2025 09:14:22 +0000 (0:00:00.283) 0:00:26.732 ******* 2025-02-10 09:14:22.517053 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:22.519930 | orchestrator | 2025-02-10 09:14:22.522415 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:22.729433 | orchestrator | Monday 10 February 2025 09:14:22 +0000 (0:00:00.219) 0:00:26.951 ******* 2025-02-10 09:14:22.729563 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:22.733514 | orchestrator | 2025-02-10 09:14:22.734622 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-02-10 09:14:22.735730 | orchestrator | Monday 10 February 2025 09:14:22 +0000 (0:00:00.207) 0:00:27.159 ******* 2025-02-10 09:14:23.945124 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-02-10 09:14:23.949630 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-02-10 09:14:23.949722 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-02-10 09:14:23.949758 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-02-10 09:14:23.953189 | orchestrator | 2025-02-10 09:14:23.953945 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:23.954635 | orchestrator | Monday 10 February 2025 09:14:23 +0000 (0:00:01.215) 0:00:28.375 ******* 2025-02-10 09:14:24.241709 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:24.242513 | orchestrator | 2025-02-10 09:14:24.244219 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:24.487021 | orchestrator | Monday 10 February 2025 09:14:24 +0000 (0:00:00.299) 0:00:28.674 ******* 2025-02-10 09:14:24.487180 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:24.487544 | orchestrator | 2025-02-10 09:14:24.488073 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:24.488670 | orchestrator | Monday 10 February 2025 09:14:24 +0000 (0:00:00.247) 0:00:28.921 ******* 2025-02-10 09:14:24.698106 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:24.699733 | orchestrator | 2025-02-10 09:14:24.702927 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:24.706102 | orchestrator | Monday 10 February 2025 09:14:24 +0000 (0:00:00.209) 0:00:29.131 ******* 2025-02-10 09:14:24.911508 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:24.912548 | orchestrator | 2025-02-10 09:14:24.912643 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-02-10 09:14:24.912889 | orchestrator | Monday 10 February 2025 09:14:24 +0000 (0:00:00.213) 0:00:29.345 ******* 2025-02-10 09:14:25.141534 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-02-10 09:14:25.142767 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-02-10 09:14:25.146191 | orchestrator | 2025-02-10 09:14:25.279670 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-02-10 09:14:25.279804 | orchestrator | Monday 10 February 2025 09:14:25 +0000 (0:00:00.229) 0:00:29.575 ******* 2025-02-10 09:14:25.279839 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:25.280251 | orchestrator | 2025-02-10 09:14:25.280597 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-02-10 09:14:25.282001 | orchestrator | Monday 10 February 2025 09:14:25 +0000 (0:00:00.140) 0:00:29.715 ******* 2025-02-10 09:14:25.465841 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:25.466500 | orchestrator | 2025-02-10 09:14:25.467050 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-02-10 09:14:25.467134 | orchestrator | Monday 10 February 2025 09:14:25 +0000 (0:00:00.183) 0:00:29.898 ******* 2025-02-10 09:14:25.618261 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:25.619759 | orchestrator | 2025-02-10 
09:14:25.619812 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-02-10 09:14:25.825434 | orchestrator | Monday 10 February 2025 09:14:25 +0000 (0:00:00.154) 0:00:30.053 ******* 2025-02-10 09:14:25.825590 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:14:25.829235 | orchestrator | 2025-02-10 09:14:25.830178 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-02-10 09:14:25.830419 | orchestrator | Monday 10 February 2025 09:14:25 +0000 (0:00:00.207) 0:00:30.261 ******* 2025-02-10 09:14:26.059489 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8f95f397-c0f5-5bc9-9af0-9f577faebed9'}}) 2025-02-10 09:14:26.059668 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '204ceda1-8353-534a-a397-2ce8fe516c0b'}}) 2025-02-10 09:14:26.059731 | orchestrator | 2025-02-10 09:14:26.060467 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-02-10 09:14:26.061245 | orchestrator | Monday 10 February 2025 09:14:26 +0000 (0:00:00.233) 0:00:30.494 ******* 2025-02-10 09:14:26.642900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8f95f397-c0f5-5bc9-9af0-9f577faebed9'}})  2025-02-10 09:14:26.643887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '204ceda1-8353-534a-a397-2ce8fe516c0b'}})  2025-02-10 09:14:26.644118 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:26.644915 | orchestrator | 2025-02-10 09:14:26.646254 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-02-10 09:14:26.647781 | orchestrator | Monday 10 February 2025 09:14:26 +0000 (0:00:00.581) 0:00:31.075 ******* 2025-02-10 09:14:26.871664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8f95f397-c0f5-5bc9-9af0-9f577faebed9'}})  2025-02-10 09:14:26.872702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '204ceda1-8353-534a-a397-2ce8fe516c0b'}})  2025-02-10 09:14:26.875972 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:26.876371 | orchestrator | 2025-02-10 09:14:26.879284 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-02-10 09:14:26.879630 | orchestrator | Monday 10 February 2025 09:14:26 +0000 (0:00:00.231) 0:00:31.307 ******* 2025-02-10 09:14:27.116094 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8f95f397-c0f5-5bc9-9af0-9f577faebed9'}})  2025-02-10 09:14:27.116286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '204ceda1-8353-534a-a397-2ce8fe516c0b'}})  2025-02-10 09:14:27.116318 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:27.116702 | orchestrator | 2025-02-10 09:14:27.117419 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-02-10 09:14:27.118001 | orchestrator | Monday 10 February 2025 09:14:27 +0000 (0:00:00.243) 0:00:31.550 ******* 2025-02-10 09:14:27.310088 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:14:27.310305 | orchestrator | 2025-02-10 09:14:27.310399 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-02-10 09:14:27.310423 | orchestrator | Monday 10 February 2025 09:14:27 +0000 
(0:00:00.190) 0:00:31.740 ******* 2025-02-10 09:14:27.453792 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:14:27.454459 | orchestrator | 2025-02-10 09:14:27.456256 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-02-10 09:14:27.592317 | orchestrator | Monday 10 February 2025 09:14:27 +0000 (0:00:00.144) 0:00:31.885 ******* 2025-02-10 09:14:27.592583 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:27.594248 | orchestrator | 2025-02-10 09:14:27.595205 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-02-10 09:14:27.596124 | orchestrator | Monday 10 February 2025 09:14:27 +0000 (0:00:00.141) 0:00:32.027 ******* 2025-02-10 09:14:27.729905 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:27.730643 | orchestrator | 2025-02-10 09:14:27.730770 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-02-10 09:14:27.730880 | orchestrator | Monday 10 February 2025 09:14:27 +0000 (0:00:00.133) 0:00:32.161 ******* 2025-02-10 09:14:27.870624 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:27.871087 | orchestrator | 2025-02-10 09:14:27.871315 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-02-10 09:14:27.871502 | orchestrator | Monday 10 February 2025 09:14:27 +0000 (0:00:00.144) 0:00:32.306 ******* 2025-02-10 09:14:28.030622 | orchestrator | ok: [testbed-node-4] => { 2025-02-10 09:14:28.033517 | orchestrator |  "ceph_osd_devices": { 2025-02-10 09:14:28.034927 | orchestrator |  "sdb": { 2025-02-10 09:14:28.035579 | orchestrator |  "osd_lvm_uuid": "8f95f397-c0f5-5bc9-9af0-9f577faebed9" 2025-02-10 09:14:28.035612 | orchestrator |  }, 2025-02-10 09:14:28.035629 | orchestrator |  "sdc": { 2025-02-10 09:14:28.035645 | orchestrator |  "osd_lvm_uuid": "204ceda1-8353-534a-a397-2ce8fe516c0b" 2025-02-10 09:14:28.035661 | orchestrator |  } 2025-02-10 09:14:28.035680 | orchestrator |  } 2025-02-10 09:14:28.035702 | orchestrator | } 2025-02-10 09:14:28.035839 | orchestrator | 2025-02-10 09:14:28.036142 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-02-10 09:14:28.036468 | orchestrator | Monday 10 February 2025 09:14:28 +0000 (0:00:00.157) 0:00:32.463 ******* 2025-02-10 09:14:28.184985 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:28.185198 | orchestrator | 2025-02-10 09:14:28.185230 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-02-10 09:14:28.185296 | orchestrator | Monday 10 February 2025 09:14:28 +0000 (0:00:00.154) 0:00:32.618 ******* 2025-02-10 09:14:28.329719 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:28.330241 | orchestrator | 2025-02-10 09:14:28.330544 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-02-10 09:14:28.330593 | orchestrator | Monday 10 February 2025 09:14:28 +0000 (0:00:00.147) 0:00:32.765 ******* 2025-02-10 09:14:28.465772 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:14:28.466234 | orchestrator | 2025-02-10 09:14:28.466676 | orchestrator | TASK [Print configuration data] ************************************************ 2025-02-10 09:14:28.467465 | orchestrator | Monday 10 February 2025 09:14:28 +0000 (0:00:00.135) 0:00:32.900 ******* 2025-02-10 09:14:29.018202 | orchestrator | changed: [testbed-node-4] => { 2025-02-10 09:14:29.020530 | 
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-02-10 09:14:29.021844 | orchestrator |  "ceph_osd_devices": { 2025-02-10 09:14:29.021904 | orchestrator |  "sdb": { 2025-02-10 09:14:29.021942 | orchestrator |  "osd_lvm_uuid": "8f95f397-c0f5-5bc9-9af0-9f577faebed9" 2025-02-10 09:14:29.022883 | orchestrator |  }, 2025-02-10 09:14:29.024697 | orchestrator |  "sdc": { 2025-02-10 09:14:29.026062 | orchestrator |  "osd_lvm_uuid": "204ceda1-8353-534a-a397-2ce8fe516c0b" 2025-02-10 09:14:29.026502 | orchestrator |  } 2025-02-10 09:14:29.027067 | orchestrator |  }, 2025-02-10 09:14:29.028523 | orchestrator |  "lvm_volumes": [ 2025-02-10 09:14:29.029087 | orchestrator |  { 2025-02-10 09:14:29.030489 | orchestrator |  "data": "osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9", 2025-02-10 09:14:29.030823 | orchestrator |  "data_vg": "ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9" 2025-02-10 09:14:29.031297 | orchestrator |  }, 2025-02-10 09:14:29.031873 | orchestrator |  { 2025-02-10 09:14:29.032385 | orchestrator |  "data": "osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b", 2025-02-10 09:14:29.035297 | orchestrator |  "data_vg": "ceph-204ceda1-8353-534a-a397-2ce8fe516c0b" 2025-02-10 09:14:29.035907 | orchestrator |  } 2025-02-10 09:14:29.035937 | orchestrator |  ] 2025-02-10 09:14:29.035952 | orchestrator |  } 2025-02-10 09:14:29.035966 | orchestrator | } 2025-02-10 09:14:29.035981 | orchestrator | 2025-02-10 09:14:29.036001 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-02-10 09:14:29.037030 | orchestrator | Monday 10 February 2025 09:14:29 +0000 (0:00:00.551) 0:00:33.451 ******* 2025-02-10 09:14:30.622751 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-02-10 09:14:30.622977 | orchestrator | 2025-02-10 09:14:30.623016 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-02-10 09:14:30.623130 | orchestrator | 2025-02-10 09:14:30.623195 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-10 09:14:30.623217 | orchestrator | Monday 10 February 2025 09:14:30 +0000 (0:00:01.599) 0:00:35.051 ******* 2025-02-10 09:14:30.884278 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-02-10 09:14:30.884434 | orchestrator | 2025-02-10 09:14:30.884449 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-10 09:14:30.886495 | orchestrator | Monday 10 February 2025 09:14:30 +0000 (0:00:00.266) 0:00:35.317 ******* 2025-02-10 09:14:31.623287 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:14:31.623821 | orchestrator | 2025-02-10 09:14:31.624308 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:31.624383 | orchestrator | Monday 10 February 2025 09:14:31 +0000 (0:00:00.737) 0:00:36.056 ******* 2025-02-10 09:14:32.064803 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-02-10 09:14:32.065896 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-02-10 09:14:32.065947 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-02-10 09:14:32.066532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-02-10 09:14:32.067124 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop4) 2025-02-10 09:14:32.069626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-02-10 09:14:32.070418 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-02-10 09:14:32.070741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-02-10 09:14:32.071056 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-02-10 09:14:32.071466 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-02-10 09:14:32.071753 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-02-10 09:14:32.072133 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-02-10 09:14:32.072323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-02-10 09:14:32.072664 | orchestrator | 2025-02-10 09:14:32.073722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:32.307047 | orchestrator | Monday 10 February 2025 09:14:32 +0000 (0:00:00.443) 0:00:36.499 ******* 2025-02-10 09:14:32.307200 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:32.308162 | orchestrator | 2025-02-10 09:14:32.309694 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:32.313140 | orchestrator | Monday 10 February 2025 09:14:32 +0000 (0:00:00.241) 0:00:36.740 ******* 2025-02-10 09:14:32.547319 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:32.547549 | orchestrator | 2025-02-10 09:14:32.547572 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:32.547594 | orchestrator | Monday 10 February 2025 09:14:32 +0000 (0:00:00.239) 0:00:36.980 ******* 2025-02-10 09:14:32.765071 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:32.765777 | orchestrator | 2025-02-10 09:14:32.766614 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:32.767471 | orchestrator | Monday 10 February 2025 09:14:32 +0000 (0:00:00.219) 0:00:37.200 ******* 2025-02-10 09:14:33.032018 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:33.033033 | orchestrator | 2025-02-10 09:14:33.035323 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:33.036276 | orchestrator | Monday 10 February 2025 09:14:33 +0000 (0:00:00.265) 0:00:37.465 ******* 2025-02-10 09:14:33.262475 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:33.263012 | orchestrator | 2025-02-10 09:14:33.267779 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:33.268473 | orchestrator | Monday 10 February 2025 09:14:33 +0000 (0:00:00.229) 0:00:37.695 ******* 2025-02-10 09:14:33.499253 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:33.500046 | orchestrator | 2025-02-10 09:14:33.501500 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:33.502685 | orchestrator | Monday 10 February 2025 09:14:33 +0000 (0:00:00.237) 0:00:37.933 ******* 2025-02-10 09:14:33.757676 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:33.757970 
| orchestrator | 2025-02-10 09:14:33.762095 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:33.763067 | orchestrator | Monday 10 February 2025 09:14:33 +0000 (0:00:00.256) 0:00:38.190 ******* 2025-02-10 09:14:33.986255 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:33.986494 | orchestrator | 2025-02-10 09:14:33.986656 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:33.987096 | orchestrator | Monday 10 February 2025 09:14:33 +0000 (0:00:00.231) 0:00:38.421 ******* 2025-02-10 09:14:34.917899 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642) 2025-02-10 09:14:34.919763 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642) 2025-02-10 09:14:34.919820 | orchestrator | 2025-02-10 09:14:34.920332 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:34.920392 | orchestrator | Monday 10 February 2025 09:14:34 +0000 (0:00:00.931) 0:00:39.352 ******* 2025-02-10 09:14:35.496169 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f26c39ad-11ff-4bfe-ad92-01d3e6216f06) 2025-02-10 09:14:35.497151 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f26c39ad-11ff-4bfe-ad92-01d3e6216f06) 2025-02-10 09:14:35.497754 | orchestrator | 2025-02-10 09:14:35.498428 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:35.498857 | orchestrator | Monday 10 February 2025 09:14:35 +0000 (0:00:00.577) 0:00:39.929 ******* 2025-02-10 09:14:35.983357 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8c6e9329-7e35-46a5-ba0c-0fddbc56ea2a) 2025-02-10 09:14:35.983553 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8c6e9329-7e35-46a5-ba0c-0fddbc56ea2a) 2025-02-10 09:14:35.983860 | orchestrator | 2025-02-10 09:14:35.984945 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:35.985501 | orchestrator | Monday 10 February 2025 09:14:35 +0000 (0:00:00.488) 0:00:40.418 ******* 2025-02-10 09:14:36.531061 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_30eee918-495f-46ac-9f20-7bf018cd9f92) 2025-02-10 09:14:36.531272 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_30eee918-495f-46ac-9f20-7bf018cd9f92) 2025-02-10 09:14:36.531302 | orchestrator | 2025-02-10 09:14:36.531499 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:14:36.531529 | orchestrator | Monday 10 February 2025 09:14:36 +0000 (0:00:00.545) 0:00:40.963 ******* 2025-02-10 09:14:36.891189 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-10 09:14:36.894517 | orchestrator | 2025-02-10 09:14:36.895072 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:36.895121 | orchestrator | Monday 10 February 2025 09:14:36 +0000 (0:00:00.360) 0:00:41.324 ******* 2025-02-10 09:14:37.291759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-02-10 09:14:37.292667 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-02-10 09:14:37.296710 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-02-10 09:14:37.298010 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-02-10 09:14:37.298920 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-02-10 09:14:37.300892 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-02-10 09:14:37.302137 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-02-10 09:14:37.302910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-02-10 09:14:37.303577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-02-10 09:14:37.304299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-02-10 09:14:37.305595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-02-10 09:14:37.306849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-02-10 09:14:37.307324 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-02-10 09:14:37.308370 | orchestrator | 2025-02-10 09:14:37.309175 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:37.310113 | orchestrator | Monday 10 February 2025 09:14:37 +0000 (0:00:00.400) 0:00:41.725 ******* 2025-02-10 09:14:37.530284 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:37.534191 | orchestrator | 2025-02-10 09:14:37.534395 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:37.536163 | orchestrator | Monday 10 February 2025 09:14:37 +0000 (0:00:00.237) 0:00:41.963 ******* 2025-02-10 09:14:37.736952 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:37.737723 | orchestrator | 2025-02-10 09:14:37.738843 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:37.739582 | orchestrator | Monday 10 February 2025 09:14:37 +0000 (0:00:00.207) 0:00:42.170 ******* 2025-02-10 09:14:37.957025 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:37.957508 | orchestrator | 2025-02-10 09:14:37.958892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:37.962651 | orchestrator | Monday 10 February 2025 09:14:37 +0000 (0:00:00.220) 0:00:42.391 ******* 2025-02-10 09:14:38.734460 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:38.734756 | orchestrator | 2025-02-10 09:14:38.734814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:38.735049 | orchestrator | Monday 10 February 2025 09:14:38 +0000 (0:00:00.777) 0:00:43.168 ******* 2025-02-10 09:14:38.977381 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:38.977736 | orchestrator | 2025-02-10 09:14:38.977788 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:38.978641 | orchestrator | Monday 10 February 2025 09:14:38 +0000 (0:00:00.242) 0:00:43.411 ******* 2025-02-10 09:14:39.203990 | orchestrator | skipping: [testbed-node-5] 2025-02-10 
09:14:39.205200 | orchestrator | 2025-02-10 09:14:39.205315 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:39.205408 | orchestrator | Monday 10 February 2025 09:14:39 +0000 (0:00:00.225) 0:00:43.637 ******* 2025-02-10 09:14:39.451816 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:39.452022 | orchestrator | 2025-02-10 09:14:39.452534 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:39.455615 | orchestrator | Monday 10 February 2025 09:14:39 +0000 (0:00:00.247) 0:00:43.885 ******* 2025-02-10 09:14:39.667838 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:39.668789 | orchestrator | 2025-02-10 09:14:39.671132 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:39.671214 | orchestrator | Monday 10 February 2025 09:14:39 +0000 (0:00:00.215) 0:00:44.101 ******* 2025-02-10 09:14:40.382947 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-02-10 09:14:40.383326 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-02-10 09:14:40.383641 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-02-10 09:14:40.385275 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-02-10 09:14:40.610650 | orchestrator | 2025-02-10 09:14:40.610821 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:40.610850 | orchestrator | Monday 10 February 2025 09:14:40 +0000 (0:00:00.714) 0:00:44.816 ******* 2025-02-10 09:14:40.610910 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:40.610974 | orchestrator | 2025-02-10 09:14:40.611584 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:40.612354 | orchestrator | Monday 10 February 2025 09:14:40 +0000 (0:00:00.227) 0:00:45.044 ******* 2025-02-10 09:14:40.831450 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:40.835842 | orchestrator | 2025-02-10 09:14:41.029776 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:41.029933 | orchestrator | Monday 10 February 2025 09:14:40 +0000 (0:00:00.221) 0:00:45.265 ******* 2025-02-10 09:14:41.030123 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:41.030222 | orchestrator | 2025-02-10 09:14:41.030704 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:14:41.031412 | orchestrator | Monday 10 February 2025 09:14:41 +0000 (0:00:00.198) 0:00:45.463 ******* 2025-02-10 09:14:41.244451 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:41.244763 | orchestrator | 2025-02-10 09:14:41.246158 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-02-10 09:14:41.246829 | orchestrator | Monday 10 February 2025 09:14:41 +0000 (0:00:00.214) 0:00:45.678 ******* 2025-02-10 09:14:41.713967 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-02-10 09:14:41.714669 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-02-10 09:14:41.719502 | orchestrator | 2025-02-10 09:14:41.720506 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-02-10 09:14:41.720618 | orchestrator | Monday 10 February 2025 09:14:41 +0000 (0:00:00.467) 0:00:46.145 ******* 2025-02-10 09:14:41.883780 | 
orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:41.884931 | orchestrator | 2025-02-10 09:14:41.886866 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-02-10 09:14:42.055045 | orchestrator | Monday 10 February 2025 09:14:41 +0000 (0:00:00.171) 0:00:46.317 ******* 2025-02-10 09:14:42.055213 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:42.055276 | orchestrator | 2025-02-10 09:14:42.055295 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-02-10 09:14:42.055755 | orchestrator | Monday 10 February 2025 09:14:42 +0000 (0:00:00.172) 0:00:46.490 ******* 2025-02-10 09:14:42.227063 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:42.228612 | orchestrator | 2025-02-10 09:14:42.229074 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-02-10 09:14:42.233448 | orchestrator | Monday 10 February 2025 09:14:42 +0000 (0:00:00.171) 0:00:46.661 ******* 2025-02-10 09:14:42.381739 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:14:42.382848 | orchestrator | 2025-02-10 09:14:42.382893 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-02-10 09:14:42.383941 | orchestrator | Monday 10 February 2025 09:14:42 +0000 (0:00:00.155) 0:00:46.816 ******* 2025-02-10 09:14:42.584188 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c468f1bf-17d5-510b-8602-ed8efc51f14c'}}) 2025-02-10 09:14:42.584490 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b75c92e-4993-5ff3-a16a-a182a58c3e6b'}}) 2025-02-10 09:14:42.584865 | orchestrator | 2025-02-10 09:14:42.585952 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-02-10 09:14:42.586231 | orchestrator | Monday 10 February 2025 09:14:42 +0000 (0:00:00.201) 0:00:47.018 ******* 2025-02-10 09:14:42.766245 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c468f1bf-17d5-510b-8602-ed8efc51f14c'}})  2025-02-10 09:14:42.767449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b75c92e-4993-5ff3-a16a-a182a58c3e6b'}})  2025-02-10 09:14:42.768861 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:42.769699 | orchestrator | 2025-02-10 09:14:42.772319 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-02-10 09:14:42.949972 | orchestrator | Monday 10 February 2025 09:14:42 +0000 (0:00:00.182) 0:00:47.200 ******* 2025-02-10 09:14:42.950190 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c468f1bf-17d5-510b-8602-ed8efc51f14c'}})  2025-02-10 09:14:42.951220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b75c92e-4993-5ff3-a16a-a182a58c3e6b'}})  2025-02-10 09:14:42.951263 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:42.952591 | orchestrator | 2025-02-10 09:14:42.952688 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-02-10 09:14:42.953612 | orchestrator | Monday 10 February 2025 09:14:42 +0000 (0:00:00.181) 0:00:47.382 ******* 2025-02-10 09:14:43.162803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c468f1bf-17d5-510b-8602-ed8efc51f14c'}})  2025-02-10 09:14:43.163006 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b75c92e-4993-5ff3-a16a-a182a58c3e6b'}})  2025-02-10 09:14:43.163559 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:43.164323 | orchestrator | 2025-02-10 09:14:43.168055 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-02-10 09:14:43.344643 | orchestrator | Monday 10 February 2025 09:14:43 +0000 (0:00:00.211) 0:00:47.593 ******* 2025-02-10 09:14:43.344805 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:14:43.345228 | orchestrator | 2025-02-10 09:14:43.346446 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-02-10 09:14:43.347141 | orchestrator | Monday 10 February 2025 09:14:43 +0000 (0:00:00.185) 0:00:47.779 ******* 2025-02-10 09:14:43.503125 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:14:43.504148 | orchestrator | 2025-02-10 09:14:43.506709 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-02-10 09:14:43.506947 | orchestrator | Monday 10 February 2025 09:14:43 +0000 (0:00:00.157) 0:00:47.936 ******* 2025-02-10 09:14:43.659031 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:44.033572 | orchestrator | 2025-02-10 09:14:44.033733 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-02-10 09:14:44.033789 | orchestrator | Monday 10 February 2025 09:14:43 +0000 (0:00:00.152) 0:00:48.088 ******* 2025-02-10 09:14:44.033836 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:44.035069 | orchestrator | 2025-02-10 09:14:44.035118 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-02-10 09:14:44.035866 | orchestrator | Monday 10 February 2025 09:14:44 +0000 (0:00:00.375) 0:00:48.464 ******* 2025-02-10 09:14:44.183390 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:44.184257 | orchestrator | 2025-02-10 09:14:44.187673 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-02-10 09:14:44.188510 | orchestrator | Monday 10 February 2025 09:14:44 +0000 (0:00:00.150) 0:00:48.615 ******* 2025-02-10 09:14:44.329745 | orchestrator | ok: [testbed-node-5] => { 2025-02-10 09:14:44.330074 | orchestrator |  "ceph_osd_devices": { 2025-02-10 09:14:44.330116 | orchestrator |  "sdb": { 2025-02-10 09:14:44.331003 | orchestrator |  "osd_lvm_uuid": "c468f1bf-17d5-510b-8602-ed8efc51f14c" 2025-02-10 09:14:44.334276 | orchestrator |  }, 2025-02-10 09:14:44.334457 | orchestrator |  "sdc": { 2025-02-10 09:14:44.334486 | orchestrator |  "osd_lvm_uuid": "9b75c92e-4993-5ff3-a16a-a182a58c3e6b" 2025-02-10 09:14:44.334508 | orchestrator |  } 2025-02-10 09:14:44.335902 | orchestrator |  } 2025-02-10 09:14:44.336106 | orchestrator | } 2025-02-10 09:14:44.336444 | orchestrator | 2025-02-10 09:14:44.336953 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-02-10 09:14:44.337620 | orchestrator | Monday 10 February 2025 09:14:44 +0000 (0:00:00.148) 0:00:48.763 ******* 2025-02-10 09:14:44.483519 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:44.484296 | orchestrator | 2025-02-10 09:14:44.488427 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-02-10 09:14:44.488757 | orchestrator | Monday 10 February 2025 09:14:44 +0000 (0:00:00.151) 0:00:48.915 ******* 2025-02-10 
09:14:44.658828 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:44.659384 | orchestrator | 2025-02-10 09:14:44.659428 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-02-10 09:14:44.659468 | orchestrator | Monday 10 February 2025 09:14:44 +0000 (0:00:00.177) 0:00:49.092 ******* 2025-02-10 09:14:44.805469 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:14:44.805897 | orchestrator | 2025-02-10 09:14:44.809218 | orchestrator | TASK [Print configuration data] ************************************************ 2025-02-10 09:14:44.809407 | orchestrator | Monday 10 February 2025 09:14:44 +0000 (0:00:00.146) 0:00:49.239 ******* 2025-02-10 09:14:45.111504 | orchestrator | changed: [testbed-node-5] => { 2025-02-10 09:14:45.111918 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-02-10 09:14:45.111969 | orchestrator |  "ceph_osd_devices": { 2025-02-10 09:14:45.112328 | orchestrator |  "sdb": { 2025-02-10 09:14:45.112913 | orchestrator |  "osd_lvm_uuid": "c468f1bf-17d5-510b-8602-ed8efc51f14c" 2025-02-10 09:14:45.113372 | orchestrator |  }, 2025-02-10 09:14:45.113588 | orchestrator |  "sdc": { 2025-02-10 09:14:45.113904 | orchestrator |  "osd_lvm_uuid": "9b75c92e-4993-5ff3-a16a-a182a58c3e6b" 2025-02-10 09:14:45.114234 | orchestrator |  } 2025-02-10 09:14:45.114442 | orchestrator |  }, 2025-02-10 09:14:45.114596 | orchestrator |  "lvm_volumes": [ 2025-02-10 09:14:45.114830 | orchestrator |  { 2025-02-10 09:14:45.115053 | orchestrator |  "data": "osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c", 2025-02-10 09:14:45.115697 | orchestrator |  "data_vg": "ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c" 2025-02-10 09:14:45.117556 | orchestrator |  }, 2025-02-10 09:14:45.118717 | orchestrator |  { 2025-02-10 09:14:45.119195 | orchestrator |  "data": "osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b", 2025-02-10 09:14:45.120251 | orchestrator |  "data_vg": "ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b" 2025-02-10 09:14:45.120857 | orchestrator |  } 2025-02-10 09:14:45.121360 | orchestrator |  ] 2025-02-10 09:14:45.121660 | orchestrator |  } 2025-02-10 09:14:45.122237 | orchestrator | } 2025-02-10 09:14:45.122623 | orchestrator | 2025-02-10 09:14:45.123120 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-02-10 09:14:45.123741 | orchestrator | Monday 10 February 2025 09:14:45 +0000 (0:00:00.307) 0:00:49.547 ******* 2025-02-10 09:14:46.522173 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-02-10 09:14:46.522480 | orchestrator | 2025-02-10 09:14:46.522518 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:14:46.522939 | orchestrator | 2025-02-10 09:14:46 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:14:46.523539 | orchestrator | 2025-02-10 09:14:46 | INFO  | Please wait and do not abort execution. 
2025-02-10 09:14:46.523574 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-02-10 09:14:46.524648 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-02-10 09:14:46.524957 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-02-10 09:14:46.525674 | orchestrator | 2025-02-10 09:14:46.525870 | orchestrator | 2025-02-10 09:14:46.526522 | orchestrator | 2025-02-10 09:14:46.526786 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:14:46.526939 | orchestrator | Monday 10 February 2025 09:14:46 +0000 (0:00:01.407) 0:00:50.954 ******* 2025-02-10 09:14:46.527444 | orchestrator | =============================================================================== 2025-02-10 09:14:46.527691 | orchestrator | Write configuration file ------------------------------------------------ 5.49s 2025-02-10 09:14:46.531465 | orchestrator | Add known links to the list of available block devices ------------------ 1.47s 2025-02-10 09:14:46.531669 | orchestrator | Add known partitions to the list of available block devices ------------- 1.36s 2025-02-10 09:14:46.531991 | orchestrator | Get initial list of available block devices ----------------------------- 1.24s 2025-02-10 09:14:46.532211 | orchestrator | Add known partitions to the list of available block devices ------------- 1.22s 2025-02-10 09:14:46.535098 | orchestrator | Print configuration data ------------------------------------------------ 1.15s 2025-02-10 09:14:46.535272 | orchestrator | Add known links to the list of available block devices ------------------ 0.99s 2025-02-10 09:14:46.535298 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.93s 2025-02-10 09:14:46.535314 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s 2025-02-10 09:14:46.535329 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.92s 2025-02-10 09:14:46.535380 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.87s 2025-02-10 09:14:46.535395 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s 2025-02-10 09:14:46.535411 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2025-02-10 09:14:46.535436 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2025-02-10 09:14:46.535452 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2025-02-10 09:14:46.535484 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2025-02-10 09:14:46.535506 | orchestrator | Generate WAL VG names --------------------------------------------------- 0.70s 2025-02-10 09:14:46.535585 | orchestrator | Set WAL devices config data --------------------------------------------- 0.69s 2025-02-10 09:14:46.535926 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.67s 2025-02-10 09:14:46.536046 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2025-02-10 09:14:48.840739 | orchestrator | 2025-02-10 09:14:48 | INFO  | Task afbcbf3b-c6ea-4e32-9175-039b9f302d21 is running in background. Output coming soon. 
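Note: the lvm_volumes entries printed by the configuration play above are a direct, deterministic mapping from ceph_osd_devices. For the block-only layout, each OSD disk's osd_lvm_uuid becomes a data LV named osd-block-<uuid> inside a VG named ceph-<uuid>. A minimal Python sketch of that mapping, using the sdb/sdc values from the log for testbed-node-5; the helper name is illustrative and not part of the OSISM playbooks:

    # Sketch only: reproduce the block-only lvm_volumes structure shown in
    # "Print configuration data" above from the ceph_osd_devices dict.
    ceph_osd_devices = {
        "sdb": {"osd_lvm_uuid": "c468f1bf-17d5-510b-8602-ed8efc51f14c"},
        "sdc": {"osd_lvm_uuid": "9b75c92e-4993-5ff3-a16a-a182a58c3e6b"},
    }

    def lvm_volumes_block_only(devices):
        """Map each OSD device to its data LV/VG pair (block-only layout)."""
        return [
            {
                "data": f"osd-block-{cfg['osd_lvm_uuid']}",
                "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
            }
            for cfg in devices.values()
        ]

    print(lvm_volumes_block_only(ceph_osd_devices))

Running the sketch prints the same two data/data_vg pairs that the "Write configuration file" handler persists for testbed-node-5.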
2025-02-10 09:15:25.836763 | orchestrator | 2025-02-10 09:15:16 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-02-10 09:15:27.567187 | orchestrator | 2025-02-10 09:15:16 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-02-10 09:15:27.567312 | orchestrator | 2025-02-10 09:15:16 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-02-10 09:15:27.567330 | orchestrator | 2025-02-10 09:15:17 | INFO  | Handling group overwrites in 99-overwrite 2025-02-10 09:15:27.567403 | orchestrator | 2025-02-10 09:15:17 | INFO  | Removing group ceph-mds from 50-ceph 2025-02-10 09:15:27.567436 | orchestrator | 2025-02-10 09:15:17 | INFO  | Removing group ceph-rgw from 50-ceph 2025-02-10 09:15:27.567451 | orchestrator | 2025-02-10 09:15:17 | INFO  | Removing group netbird:children from 50-infrastructure 2025-02-10 09:15:27.567466 | orchestrator | 2025-02-10 09:15:17 | INFO  | Removing group storage:children from 50-kolla 2025-02-10 09:15:27.567480 | orchestrator | 2025-02-10 09:15:17 | INFO  | Removing group frr:children from 60-generic 2025-02-10 09:15:27.567494 | orchestrator | 2025-02-10 09:15:17 | INFO  | Handling group overwrites in 20-roles 2025-02-10 09:15:27.567508 | orchestrator | 2025-02-10 09:15:17 | INFO  | Removing group k3s_node from 50-infrastructure 2025-02-10 09:15:27.567615 | orchestrator | 2025-02-10 09:15:18 | INFO  | File 20-netbox not found in /inventory.pre/ 2025-02-10 09:15:27.567630 | orchestrator | 2025-02-10 09:15:25 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups 2025-02-10 09:15:27.567663 | orchestrator | 2025-02-10 09:15:27 | INFO  | Task 650e086b-a254-492a-9272-80599b4191ec (ceph-create-lvm-devices) was prepared for execution. 2025-02-10 09:15:30.574317 | orchestrator | 2025-02-10 09:15:27 | INFO  | It takes a moment until task 650e086b-a254-492a-9272-80599b4191ec (ceph-create-lvm-devices) has been started and output is visible here. 
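Note: the ceph-create-lvm-devices play that starts below turns the generated lvm_volumes entries into actual LVM objects, one volume group ceph-<uuid> per OSD disk and one logical volume osd-block-<uuid> inside it (see the "Create block VGs" / "Create block LVs" tasks further down). A rough sketch of the equivalent CLI calls for testbed-node-3, using the sdb/sdc UUIDs from the log; sizing the LV with 100%FREE is an assumption, and the helper only assembles command lines, it is not part of OSISM:

    # Sketch only: vgcreate/lvcreate calls corresponding to the
    # "Create block VGs" / "Create block LVs" tasks below.
    def lvm_commands(device, osd_lvm_uuid):
        vg = f"ceph-{osd_lvm_uuid}"
        lv = f"osd-block-{osd_lvm_uuid}"
        return [
            ["vgcreate", vg, f"/dev/{device}"],            # one VG per OSD disk
            ["lvcreate", "-l", "100%FREE", "-n", lv, vg],  # assumed: block LV fills the VG
        ]

    for device, uuid in [
        ("sdb", "f024456c-4135-5029-bf0e-13fb105dc5b7"),
        ("sdc", "a3ebd317-95a0-5383-a134-14be01baa44d"),
    ]:
        for cmd in lvm_commands(device, uuid):
            print(" ".join(cmd))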
2025-02-10 09:15:30.574516 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-10 09:15:31.039430 | orchestrator | 2025-02-10 09:15:31.040792 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-02-10 09:15:31.043301 | orchestrator | 2025-02-10 09:15:31.308662 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-10 09:15:31.308718 | orchestrator | Monday 10 February 2025 09:15:31 +0000 (0:00:00.402) 0:00:00.402 ******* 2025-02-10 09:15:31.308735 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:15:31.311079 | orchestrator | 2025-02-10 09:15:31.532897 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-10 09:15:31.533068 | orchestrator | Monday 10 February 2025 09:15:31 +0000 (0:00:00.269) 0:00:00.672 ******* 2025-02-10 09:15:31.533108 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:15:31.533228 | orchestrator | 2025-02-10 09:15:31.533289 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:31.533579 | orchestrator | Monday 10 February 2025 09:15:31 +0000 (0:00:00.224) 0:00:00.896 ******* 2025-02-10 09:15:32.183030 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-02-10 09:15:32.183223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-02-10 09:15:32.184245 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-02-10 09:15:32.184757 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-02-10 09:15:32.185715 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-02-10 09:15:32.185859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-02-10 09:15:32.186294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-02-10 09:15:32.186706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-02-10 09:15:32.187030 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-02-10 09:15:32.187434 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-02-10 09:15:32.187850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-02-10 09:15:32.187985 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-02-10 09:15:32.188430 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-02-10 09:15:32.188733 | orchestrator | 2025-02-10 09:15:32.189206 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:32.189378 | orchestrator | Monday 10 February 2025 09:15:32 +0000 (0:00:00.650) 0:00:01.547 ******* 2025-02-10 09:15:32.365740 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:32.365928 | orchestrator | 2025-02-10 09:15:32.365960 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:32.372572 | orchestrator | Monday 10 February 2025 09:15:32 +0000 
(0:00:00.181) 0:00:01.728 ******* 2025-02-10 09:15:32.566288 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:32.566806 | orchestrator | 2025-02-10 09:15:32.570392 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:32.771715 | orchestrator | Monday 10 February 2025 09:15:32 +0000 (0:00:00.199) 0:00:01.928 ******* 2025-02-10 09:15:32.771845 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:32.774091 | orchestrator | 2025-02-10 09:15:32.774829 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:32.775774 | orchestrator | Monday 10 February 2025 09:15:32 +0000 (0:00:00.206) 0:00:02.135 ******* 2025-02-10 09:15:33.000367 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:33.001379 | orchestrator | 2025-02-10 09:15:33.001783 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:33.001887 | orchestrator | Monday 10 February 2025 09:15:32 +0000 (0:00:00.228) 0:00:02.364 ******* 2025-02-10 09:15:33.204022 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:33.204515 | orchestrator | 2025-02-10 09:15:33.205474 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:33.206646 | orchestrator | Monday 10 February 2025 09:15:33 +0000 (0:00:00.203) 0:00:02.567 ******* 2025-02-10 09:15:33.437070 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:33.437347 | orchestrator | 2025-02-10 09:15:33.437866 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:33.438418 | orchestrator | Monday 10 February 2025 09:15:33 +0000 (0:00:00.228) 0:00:02.796 ******* 2025-02-10 09:15:33.640584 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:33.640750 | orchestrator | 2025-02-10 09:15:33.642717 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:33.643228 | orchestrator | Monday 10 February 2025 09:15:33 +0000 (0:00:00.206) 0:00:03.002 ******* 2025-02-10 09:15:33.848523 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:33.849333 | orchestrator | 2025-02-10 09:15:33.849410 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:33.849440 | orchestrator | Monday 10 February 2025 09:15:33 +0000 (0:00:00.206) 0:00:03.209 ******* 2025-02-10 09:15:34.745847 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84) 2025-02-10 09:15:34.747012 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84) 2025-02-10 09:15:34.748510 | orchestrator | 2025-02-10 09:15:34.749763 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:34.750973 | orchestrator | Monday 10 February 2025 09:15:34 +0000 (0:00:00.897) 0:00:04.107 ******* 2025-02-10 09:15:35.207438 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2f4b37ab-ea48-4e89-a573-74f28832e598) 2025-02-10 09:15:35.208251 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2f4b37ab-ea48-4e89-a573-74f28832e598) 2025-02-10 09:15:35.208297 | orchestrator | 2025-02-10 09:15:35.212252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 
09:15:35.662298 | orchestrator | Monday 10 February 2025 09:15:35 +0000 (0:00:00.463) 0:00:04.570 ******* 2025-02-10 09:15:35.662514 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5f0d01b9-0e02-4dee-9565-cff6803c305a) 2025-02-10 09:15:35.665151 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5f0d01b9-0e02-4dee-9565-cff6803c305a) 2025-02-10 09:15:36.133163 | orchestrator | 2025-02-10 09:15:36.133278 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:36.133291 | orchestrator | Monday 10 February 2025 09:15:35 +0000 (0:00:00.453) 0:00:05.023 ******* 2025-02-10 09:15:36.133314 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b9150377-bf23-4053-9d8b-4b6b16705e51) 2025-02-10 09:15:36.134109 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b9150377-bf23-4053-9d8b-4b6b16705e51) 2025-02-10 09:15:36.134836 | orchestrator | 2025-02-10 09:15:36.138263 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:36.480473 | orchestrator | Monday 10 February 2025 09:15:36 +0000 (0:00:00.472) 0:00:05.496 ******* 2025-02-10 09:15:36.480632 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-10 09:15:36.480711 | orchestrator | 2025-02-10 09:15:36.481477 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:36.482141 | orchestrator | Monday 10 February 2025 09:15:36 +0000 (0:00:00.345) 0:00:05.842 ******* 2025-02-10 09:15:36.945840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-02-10 09:15:36.945993 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-02-10 09:15:36.946736 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-02-10 09:15:36.951171 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-02-10 09:15:36.951328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-02-10 09:15:36.952016 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-02-10 09:15:36.952383 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-02-10 09:15:36.953508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-02-10 09:15:36.954132 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-02-10 09:15:36.954817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-02-10 09:15:36.955516 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-02-10 09:15:36.956018 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-02-10 09:15:36.956617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-02-10 09:15:36.956785 | orchestrator | 2025-02-10 09:15:36.957415 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:36.958197 | orchestrator | Monday 10 February 2025 09:15:36 
+0000 (0:00:00.467) 0:00:06.309 ******* 2025-02-10 09:15:37.157727 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:37.161072 | orchestrator | 2025-02-10 09:15:37.161155 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:37.161755 | orchestrator | Monday 10 February 2025 09:15:37 +0000 (0:00:00.209) 0:00:06.519 ******* 2025-02-10 09:15:37.366997 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:37.370660 | orchestrator | 2025-02-10 09:15:37.370735 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:37.371075 | orchestrator | Monday 10 February 2025 09:15:37 +0000 (0:00:00.210) 0:00:06.729 ******* 2025-02-10 09:15:37.580699 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:37.581508 | orchestrator | 2025-02-10 09:15:37.585004 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:37.585447 | orchestrator | Monday 10 February 2025 09:15:37 +0000 (0:00:00.212) 0:00:06.942 ******* 2025-02-10 09:15:37.776066 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:37.778242 | orchestrator | 2025-02-10 09:15:37.781170 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:37.782210 | orchestrator | Monday 10 February 2025 09:15:37 +0000 (0:00:00.197) 0:00:07.139 ******* 2025-02-10 09:15:38.282780 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:38.285626 | orchestrator | 2025-02-10 09:15:38.286498 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:38.286693 | orchestrator | Monday 10 February 2025 09:15:38 +0000 (0:00:00.503) 0:00:07.643 ******* 2025-02-10 09:15:38.481657 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:38.482548 | orchestrator | 2025-02-10 09:15:38.483308 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:38.484161 | orchestrator | Monday 10 February 2025 09:15:38 +0000 (0:00:00.199) 0:00:07.843 ******* 2025-02-10 09:15:38.704522 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:38.705093 | orchestrator | 2025-02-10 09:15:38.705927 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:38.706459 | orchestrator | Monday 10 February 2025 09:15:38 +0000 (0:00:00.224) 0:00:08.068 ******* 2025-02-10 09:15:38.915140 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:38.915531 | orchestrator | 2025-02-10 09:15:38.915985 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:38.916470 | orchestrator | Monday 10 February 2025 09:15:38 +0000 (0:00:00.210) 0:00:08.278 ******* 2025-02-10 09:15:39.549554 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-02-10 09:15:39.549727 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-02-10 09:15:39.549749 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-02-10 09:15:39.549768 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-02-10 09:15:39.550120 | orchestrator | 2025-02-10 09:15:39.550280 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:39.550310 | orchestrator | Monday 10 February 2025 09:15:39 +0000 (0:00:00.632) 0:00:08.911 ******* 2025-02-10 09:15:39.778119 | orchestrator | 
skipping: [testbed-node-3] 2025-02-10 09:15:39.778255 | orchestrator | 2025-02-10 09:15:39.778658 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:39.779048 | orchestrator | Monday 10 February 2025 09:15:39 +0000 (0:00:00.229) 0:00:09.141 ******* 2025-02-10 09:15:39.973424 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:39.973629 | orchestrator | 2025-02-10 09:15:39.974161 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:39.975084 | orchestrator | Monday 10 February 2025 09:15:39 +0000 (0:00:00.195) 0:00:09.336 ******* 2025-02-10 09:15:40.183253 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:40.184229 | orchestrator | 2025-02-10 09:15:40.184663 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:15:40.185872 | orchestrator | Monday 10 February 2025 09:15:40 +0000 (0:00:00.208) 0:00:09.545 ******* 2025-02-10 09:15:40.398795 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:40.399193 | orchestrator | 2025-02-10 09:15:40.399628 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-02-10 09:15:40.401158 | orchestrator | Monday 10 February 2025 09:15:40 +0000 (0:00:00.215) 0:00:09.760 ******* 2025-02-10 09:15:40.536275 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:40.536615 | orchestrator | 2025-02-10 09:15:40.538097 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-02-10 09:15:40.538423 | orchestrator | Monday 10 February 2025 09:15:40 +0000 (0:00:00.137) 0:00:09.898 ******* 2025-02-10 09:15:40.953636 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f024456c-4135-5029-bf0e-13fb105dc5b7'}}) 2025-02-10 09:15:40.954814 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a3ebd317-95a0-5383-a134-14be01baa44d'}}) 2025-02-10 09:15:40.955869 | orchestrator | 2025-02-10 09:15:40.957546 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-02-10 09:15:40.957997 | orchestrator | Monday 10 February 2025 09:15:40 +0000 (0:00:00.418) 0:00:10.316 ******* 2025-02-10 09:15:43.031958 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'}) 2025-02-10 09:15:43.032600 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'}) 2025-02-10 09:15:43.032666 | orchestrator | 2025-02-10 09:15:43.033403 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-02-10 09:15:43.034129 | orchestrator | Monday 10 February 2025 09:15:43 +0000 (0:00:02.073) 0:00:12.389 ******* 2025-02-10 09:15:43.199715 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:43.199897 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:43.199927 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:43.200516 | orchestrator | 2025-02-10 09:15:43.201148 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-02-10 09:15:43.201661 | orchestrator | Monday 10 February 2025 09:15:43 +0000 (0:00:00.172) 0:00:12.562 ******* 2025-02-10 09:15:44.717828 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'}) 2025-02-10 09:15:44.717948 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'}) 2025-02-10 09:15:44.718937 | orchestrator | 2025-02-10 09:15:44.720060 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-02-10 09:15:44.720784 | orchestrator | Monday 10 February 2025 09:15:44 +0000 (0:00:01.517) 0:00:14.079 ******* 2025-02-10 09:15:44.887488 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:44.888235 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:44.889046 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:44.890161 | orchestrator | 2025-02-10 09:15:44.890722 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-02-10 09:15:44.891423 | orchestrator | Monday 10 February 2025 09:15:44 +0000 (0:00:00.171) 0:00:14.251 ******* 2025-02-10 09:15:45.025614 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:45.026540 | orchestrator | 2025-02-10 09:15:45.026656 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-02-10 09:15:45.191292 | orchestrator | Monday 10 February 2025 09:15:45 +0000 (0:00:00.136) 0:00:14.387 ******* 2025-02-10 09:15:45.191519 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:45.191784 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:45.192981 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:45.193719 | orchestrator | 2025-02-10 09:15:45.194587 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-02-10 09:15:45.195437 | orchestrator | Monday 10 February 2025 09:15:45 +0000 (0:00:00.164) 0:00:14.552 ******* 2025-02-10 09:15:45.364509 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:45.364793 | orchestrator | 2025-02-10 09:15:45.365842 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-02-10 09:15:45.366700 | orchestrator | Monday 10 February 2025 09:15:45 +0000 (0:00:00.174) 0:00:14.726 ******* 2025-02-10 09:15:45.553069 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:45.553645 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:45.554702 | orchestrator | skipping: 
[testbed-node-3] 2025-02-10 09:15:45.555353 | orchestrator | 2025-02-10 09:15:45.558219 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-02-10 09:15:45.558781 | orchestrator | Monday 10 February 2025 09:15:45 +0000 (0:00:00.190) 0:00:14.916 ******* 2025-02-10 09:15:45.872812 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:45.872999 | orchestrator | 2025-02-10 09:15:45.873910 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-02-10 09:15:45.874272 | orchestrator | Monday 10 February 2025 09:15:45 +0000 (0:00:00.319) 0:00:15.236 ******* 2025-02-10 09:15:46.049675 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:46.050762 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:46.051747 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:46.052223 | orchestrator | 2025-02-10 09:15:46.052766 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-02-10 09:15:46.053240 | orchestrator | Monday 10 February 2025 09:15:46 +0000 (0:00:00.176) 0:00:15.412 ******* 2025-02-10 09:15:46.199042 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:15:46.199227 | orchestrator | 2025-02-10 09:15:46.199255 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-02-10 09:15:46.200541 | orchestrator | Monday 10 February 2025 09:15:46 +0000 (0:00:00.147) 0:00:15.560 ******* 2025-02-10 09:15:46.367011 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:46.367781 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:46.368441 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:46.368493 | orchestrator | 2025-02-10 09:15:46.369522 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-02-10 09:15:46.370250 | orchestrator | Monday 10 February 2025 09:15:46 +0000 (0:00:00.170) 0:00:15.730 ******* 2025-02-10 09:15:46.523687 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:46.524459 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:46.527771 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:46.527876 | orchestrator | 2025-02-10 09:15:46.528255 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-02-10 09:15:46.528848 | orchestrator | Monday 10 February 2025 09:15:46 +0000 (0:00:00.156) 0:00:15.887 ******* 2025-02-10 09:15:46.725871 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:46.726603 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:46.727271 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:46.727689 | orchestrator | 2025-02-10 09:15:46.729674 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-02-10 09:15:46.730705 | orchestrator | Monday 10 February 2025 09:15:46 +0000 (0:00:00.202) 0:00:16.089 ******* 2025-02-10 09:15:46.864511 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:46.865811 | orchestrator | 2025-02-10 09:15:46.868872 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-02-10 09:15:46.868939 | orchestrator | Monday 10 February 2025 09:15:46 +0000 (0:00:00.137) 0:00:16.227 ******* 2025-02-10 09:15:47.024012 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:47.024202 | orchestrator | 2025-02-10 09:15:47.024510 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-02-10 09:15:47.025290 | orchestrator | Monday 10 February 2025 09:15:47 +0000 (0:00:00.160) 0:00:16.387 ******* 2025-02-10 09:15:47.175788 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:47.177686 | orchestrator | 2025-02-10 09:15:47.178121 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-02-10 09:15:47.179446 | orchestrator | Monday 10 February 2025 09:15:47 +0000 (0:00:00.149) 0:00:16.537 ******* 2025-02-10 09:15:47.319862 | orchestrator | ok: [testbed-node-3] => { 2025-02-10 09:15:47.320111 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-02-10 09:15:47.321563 | orchestrator | } 2025-02-10 09:15:47.323753 | orchestrator | 2025-02-10 09:15:47.324478 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-02-10 09:15:47.324880 | orchestrator | Monday 10 February 2025 09:15:47 +0000 (0:00:00.145) 0:00:16.682 ******* 2025-02-10 09:15:47.486421 | orchestrator | ok: [testbed-node-3] => { 2025-02-10 09:15:47.486674 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-02-10 09:15:47.488185 | orchestrator | } 2025-02-10 09:15:47.489083 | orchestrator | 2025-02-10 09:15:47.489926 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-02-10 09:15:47.490358 | orchestrator | Monday 10 February 2025 09:15:47 +0000 (0:00:00.165) 0:00:16.848 ******* 2025-02-10 09:15:47.654358 | orchestrator | ok: [testbed-node-3] => { 2025-02-10 09:15:47.654618 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-02-10 09:15:47.654653 | orchestrator | } 2025-02-10 09:15:47.655757 | orchestrator | 2025-02-10 09:15:47.655793 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-02-10 09:15:47.659985 | orchestrator | Monday 10 February 2025 09:15:47 +0000 (0:00:00.169) 0:00:17.017 ******* 2025-02-10 09:15:48.601003 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:15:48.601264 | orchestrator | 2025-02-10 09:15:48.601912 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-02-10 09:15:48.603156 | orchestrator | Monday 10 February 2025 09:15:48 +0000 (0:00:00.944) 0:00:17.962 ******* 2025-02-10 09:15:49.102183 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:15:49.102577 | orchestrator | 2025-02-10 09:15:49.102624 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] 
**************** 2025-02-10 09:15:49.103090 | orchestrator | Monday 10 February 2025 09:15:49 +0000 (0:00:00.502) 0:00:18.464 ******* 2025-02-10 09:15:49.618162 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:15:49.618357 | orchestrator | 2025-02-10 09:15:49.619097 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-02-10 09:15:49.619155 | orchestrator | Monday 10 February 2025 09:15:49 +0000 (0:00:00.515) 0:00:18.980 ******* 2025-02-10 09:15:49.760109 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:15:49.760355 | orchestrator | 2025-02-10 09:15:49.760656 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-02-10 09:15:49.760691 | orchestrator | Monday 10 February 2025 09:15:49 +0000 (0:00:00.143) 0:00:19.123 ******* 2025-02-10 09:15:49.881258 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:49.881576 | orchestrator | 2025-02-10 09:15:49.881617 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-02-10 09:15:49.882120 | orchestrator | Monday 10 February 2025 09:15:49 +0000 (0:00:00.121) 0:00:19.245 ******* 2025-02-10 09:15:49.994845 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:49.995028 | orchestrator | 2025-02-10 09:15:49.996120 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-02-10 09:15:49.996876 | orchestrator | Monday 10 February 2025 09:15:49 +0000 (0:00:00.112) 0:00:19.357 ******* 2025-02-10 09:15:50.152943 | orchestrator | ok: [testbed-node-3] => { 2025-02-10 09:15:50.154144 | orchestrator |  "vgs_report": { 2025-02-10 09:15:50.154487 | orchestrator |  "vg": [] 2025-02-10 09:15:50.155865 | orchestrator |  } 2025-02-10 09:15:50.155961 | orchestrator | } 2025-02-10 09:15:50.156971 | orchestrator | 2025-02-10 09:15:50.157446 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-02-10 09:15:50.158504 | orchestrator | Monday 10 February 2025 09:15:50 +0000 (0:00:00.158) 0:00:19.516 ******* 2025-02-10 09:15:50.296435 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:50.297710 | orchestrator | 2025-02-10 09:15:50.297760 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-02-10 09:15:50.298101 | orchestrator | Monday 10 February 2025 09:15:50 +0000 (0:00:00.138) 0:00:19.655 ******* 2025-02-10 09:15:50.441856 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:50.442154 | orchestrator | 2025-02-10 09:15:50.442192 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-02-10 09:15:50.442729 | orchestrator | Monday 10 February 2025 09:15:50 +0000 (0:00:00.149) 0:00:19.805 ******* 2025-02-10 09:15:50.590871 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:50.591398 | orchestrator | 2025-02-10 09:15:50.592093 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-02-10 09:15:50.592637 | orchestrator | Monday 10 February 2025 09:15:50 +0000 (0:00:00.149) 0:00:19.954 ******* 2025-02-10 09:15:50.736241 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:50.737949 | orchestrator | 2025-02-10 09:15:50.738003 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-02-10 09:15:50.738172 | orchestrator | Monday 10 February 2025 09:15:50 +0000 (0:00:00.146) 0:00:20.100 ******* 2025-02-10 
09:15:51.080810 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:51.081064 | orchestrator | 2025-02-10 09:15:51.082545 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-02-10 09:15:51.083441 | orchestrator | Monday 10 February 2025 09:15:51 +0000 (0:00:00.342) 0:00:20.442 ******* 2025-02-10 09:15:51.230887 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:51.231540 | orchestrator | 2025-02-10 09:15:51.232139 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-02-10 09:15:51.232811 | orchestrator | Monday 10 February 2025 09:15:51 +0000 (0:00:00.149) 0:00:20.592 ******* 2025-02-10 09:15:51.369053 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:51.369649 | orchestrator | 2025-02-10 09:15:51.369700 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-02-10 09:15:51.371783 | orchestrator | Monday 10 February 2025 09:15:51 +0000 (0:00:00.139) 0:00:20.732 ******* 2025-02-10 09:15:51.531057 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:51.531722 | orchestrator | 2025-02-10 09:15:51.534660 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-02-10 09:15:51.673173 | orchestrator | Monday 10 February 2025 09:15:51 +0000 (0:00:00.161) 0:00:20.893 ******* 2025-02-10 09:15:51.673329 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:51.673489 | orchestrator | 2025-02-10 09:15:51.673514 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-02-10 09:15:51.673534 | orchestrator | Monday 10 February 2025 09:15:51 +0000 (0:00:00.142) 0:00:21.036 ******* 2025-02-10 09:15:51.814406 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:51.814634 | orchestrator | 2025-02-10 09:15:51.815456 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-02-10 09:15:51.817977 | orchestrator | Monday 10 February 2025 09:15:51 +0000 (0:00:00.140) 0:00:21.177 ******* 2025-02-10 09:15:51.958544 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:51.958759 | orchestrator | 2025-02-10 09:15:51.958804 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-02-10 09:15:51.959346 | orchestrator | Monday 10 February 2025 09:15:51 +0000 (0:00:00.144) 0:00:21.321 ******* 2025-02-10 09:15:52.112264 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:52.255564 | orchestrator | 2025-02-10 09:15:52.255695 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-02-10 09:15:52.255713 | orchestrator | Monday 10 February 2025 09:15:52 +0000 (0:00:00.154) 0:00:21.475 ******* 2025-02-10 09:15:52.255774 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:52.255877 | orchestrator | 2025-02-10 09:15:52.258281 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-02-10 09:15:52.393551 | orchestrator | Monday 10 February 2025 09:15:52 +0000 (0:00:00.142) 0:00:21.618 ******* 2025-02-10 09:15:52.393742 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:52.393915 | orchestrator | 2025-02-10 09:15:52.394862 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-02-10 09:15:52.396263 | orchestrator | Monday 10 February 2025 09:15:52 +0000 (0:00:00.139) 0:00:21.757 
******* 2025-02-10 09:15:52.583098 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:52.583695 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:52.584311 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:52.585022 | orchestrator | 2025-02-10 09:15:52.585864 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-02-10 09:15:52.586596 | orchestrator | Monday 10 February 2025 09:15:52 +0000 (0:00:00.188) 0:00:21.946 ******* 2025-02-10 09:15:52.748993 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:52.749668 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:52.750222 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:52.752462 | orchestrator | 2025-02-10 09:15:53.117992 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-02-10 09:15:53.118211 | orchestrator | Monday 10 February 2025 09:15:52 +0000 (0:00:00.165) 0:00:22.111 ******* 2025-02-10 09:15:53.118250 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:53.118487 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:53.119070 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:53.120091 | orchestrator | 2025-02-10 09:15:53.120538 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-02-10 09:15:53.121534 | orchestrator | Monday 10 February 2025 09:15:53 +0000 (0:00:00.369) 0:00:22.481 ******* 2025-02-10 09:15:53.284981 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:53.285762 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:53.286255 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:53.287247 | orchestrator | 2025-02-10 09:15:53.287830 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-02-10 09:15:53.288747 | orchestrator | Monday 10 February 2025 09:15:53 +0000 (0:00:00.165) 0:00:22.646 ******* 2025-02-10 09:15:53.456331 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:53.456537 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:53.456603 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:53.456669 | orchestrator | 2025-02-10 09:15:53.456961 | 
orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-02-10 09:15:53.457226 | orchestrator | Monday 10 February 2025 09:15:53 +0000 (0:00:00.173) 0:00:22.820 ******* 2025-02-10 09:15:53.625630 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:53.625806 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:53.625873 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:53.626301 | orchestrator | 2025-02-10 09:15:53.626676 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-02-10 09:15:53.628155 | orchestrator | Monday 10 February 2025 09:15:53 +0000 (0:00:00.168) 0:00:22.989 ******* 2025-02-10 09:15:53.795666 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:53.944979 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:53.945126 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:53.945148 | orchestrator | 2025-02-10 09:15:53.945164 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-02-10 09:15:53.945180 | orchestrator | Monday 10 February 2025 09:15:53 +0000 (0:00:00.168) 0:00:23.157 ******* 2025-02-10 09:15:53.945212 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:53.945331 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:53.946154 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:53.946811 | orchestrator | 2025-02-10 09:15:53.947947 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-02-10 09:15:53.948708 | orchestrator | Monday 10 February 2025 09:15:53 +0000 (0:00:00.151) 0:00:23.308 ******* 2025-02-10 09:15:54.487875 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:15:54.488106 | orchestrator | 2025-02-10 09:15:54.488135 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-02-10 09:15:54.488162 | orchestrator | Monday 10 February 2025 09:15:54 +0000 (0:00:00.541) 0:00:23.849 ******* 2025-02-10 09:15:55.057031 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:15:55.057613 | orchestrator | 2025-02-10 09:15:55.058505 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-02-10 09:15:55.059622 | orchestrator | Monday 10 February 2025 09:15:55 +0000 (0:00:00.568) 0:00:24.418 ******* 2025-02-10 09:15:55.221206 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:15:55.221683 | orchestrator | 2025-02-10 09:15:55.221715 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-02-10 09:15:55.221735 | orchestrator | Monday 10 February 2025 09:15:55 +0000 (0:00:00.165) 0:00:24.583 ******* 2025-02-10 09:15:55.405777 | 
orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'vg_name': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'}) 2025-02-10 09:15:55.406172 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'vg_name': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'}) 2025-02-10 09:15:55.406496 | orchestrator | 2025-02-10 09:15:55.406848 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-02-10 09:15:55.407354 | orchestrator | Monday 10 February 2025 09:15:55 +0000 (0:00:00.184) 0:00:24.768 ******* 2025-02-10 09:15:55.803974 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:55.804947 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:55.806089 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:55.807222 | orchestrator | 2025-02-10 09:15:55.807821 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-02-10 09:15:55.808472 | orchestrator | Monday 10 February 2025 09:15:55 +0000 (0:00:00.397) 0:00:25.165 ******* 2025-02-10 09:15:55.978793 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:55.979435 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:55.982477 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:55.983345 | orchestrator | 2025-02-10 09:15:55.983403 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-02-10 09:15:55.983429 | orchestrator | Monday 10 February 2025 09:15:55 +0000 (0:00:00.176) 0:00:25.341 ******* 2025-02-10 09:15:56.163615 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'})  2025-02-10 09:15:56.163809 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'})  2025-02-10 09:15:56.164722 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:15:56.165171 | orchestrator | 2025-02-10 09:15:56.167817 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-02-10 09:15:56.878454 | orchestrator | Monday 10 February 2025 09:15:56 +0000 (0:00:00.185) 0:00:25.527 ******* 2025-02-10 09:15:56.879657 | orchestrator | ok: [testbed-node-3] => { 2025-02-10 09:15:56.879750 | orchestrator |  "lvm_report": { 2025-02-10 09:15:56.879768 | orchestrator |  "lv": [ 2025-02-10 09:15:56.879781 | orchestrator |  { 2025-02-10 09:15:56.879795 | orchestrator |  "lv_name": "osd-block-a3ebd317-95a0-5383-a134-14be01baa44d", 2025-02-10 09:15:56.879813 | orchestrator |  "vg_name": "ceph-a3ebd317-95a0-5383-a134-14be01baa44d" 2025-02-10 09:15:56.880589 | orchestrator |  }, 2025-02-10 09:15:56.881129 | orchestrator |  { 2025-02-10 09:15:56.882165 | orchestrator |  "lv_name": "osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7", 2025-02-10 
09:15:56.882752 | orchestrator |  "vg_name": "ceph-f024456c-4135-5029-bf0e-13fb105dc5b7" 2025-02-10 09:15:56.883618 | orchestrator |  } 2025-02-10 09:15:56.884321 | orchestrator |  ], 2025-02-10 09:15:56.885456 | orchestrator |  "pv": [ 2025-02-10 09:15:56.886554 | orchestrator |  { 2025-02-10 09:15:56.887250 | orchestrator |  "pv_name": "/dev/sdb", 2025-02-10 09:15:56.887849 | orchestrator |  "vg_name": "ceph-f024456c-4135-5029-bf0e-13fb105dc5b7" 2025-02-10 09:15:56.888645 | orchestrator |  }, 2025-02-10 09:15:56.888822 | orchestrator |  { 2025-02-10 09:15:56.889180 | orchestrator |  "pv_name": "/dev/sdc", 2025-02-10 09:15:56.889581 | orchestrator |  "vg_name": "ceph-a3ebd317-95a0-5383-a134-14be01baa44d" 2025-02-10 09:15:56.889969 | orchestrator |  } 2025-02-10 09:15:56.890453 | orchestrator |  ] 2025-02-10 09:15:56.890780 | orchestrator |  } 2025-02-10 09:15:56.891258 | orchestrator | } 2025-02-10 09:15:56.891585 | orchestrator | 2025-02-10 09:15:56.892160 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-02-10 09:15:56.892333 | orchestrator | 2025-02-10 09:15:56.894773 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-10 09:15:56.894934 | orchestrator | Monday 10 February 2025 09:15:56 +0000 (0:00:00.710) 0:00:26.237 ******* 2025-02-10 09:15:57.446497 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-02-10 09:15:57.446718 | orchestrator | 2025-02-10 09:15:57.447208 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-10 09:15:57.448696 | orchestrator | Monday 10 February 2025 09:15:57 +0000 (0:00:00.571) 0:00:26.809 ******* 2025-02-10 09:15:57.681294 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:15:57.682188 | orchestrator | 2025-02-10 09:15:57.682934 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:57.684600 | orchestrator | Monday 10 February 2025 09:15:57 +0000 (0:00:00.234) 0:00:27.043 ******* 2025-02-10 09:15:58.156465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-02-10 09:15:58.157003 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-02-10 09:15:58.157053 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-02-10 09:15:58.157717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-02-10 09:15:58.157935 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-02-10 09:15:58.158527 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-02-10 09:15:58.159104 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-02-10 09:15:58.159623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-02-10 09:15:58.160330 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-02-10 09:15:58.160988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-02-10 09:15:58.161831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-02-10 09:15:58.162791 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-02-10 09:15:58.163411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-02-10 09:15:58.164672 | orchestrator | 2025-02-10 09:15:58.164866 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:58.165327 | orchestrator | Monday 10 February 2025 09:15:58 +0000 (0:00:00.475) 0:00:27.518 ******* 2025-02-10 09:15:58.340444 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:58.341494 | orchestrator | 2025-02-10 09:15:58.342983 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:58.531777 | orchestrator | Monday 10 February 2025 09:15:58 +0000 (0:00:00.183) 0:00:27.702 ******* 2025-02-10 09:15:58.531916 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:58.532031 | orchestrator | 2025-02-10 09:15:58.534498 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:58.535487 | orchestrator | Monday 10 February 2025 09:15:58 +0000 (0:00:00.192) 0:00:27.895 ******* 2025-02-10 09:15:58.742003 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:58.742278 | orchestrator | 2025-02-10 09:15:58.742313 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:58.743279 | orchestrator | Monday 10 February 2025 09:15:58 +0000 (0:00:00.209) 0:00:28.105 ******* 2025-02-10 09:15:58.941230 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:58.941506 | orchestrator | 2025-02-10 09:15:58.941541 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:58.941893 | orchestrator | Monday 10 February 2025 09:15:58 +0000 (0:00:00.199) 0:00:28.304 ******* 2025-02-10 09:15:59.141952 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:59.142238 | orchestrator | 2025-02-10 09:15:59.142859 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:59.143077 | orchestrator | Monday 10 February 2025 09:15:59 +0000 (0:00:00.200) 0:00:28.505 ******* 2025-02-10 09:15:59.365247 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:59.365494 | orchestrator | 2025-02-10 09:15:59.366354 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:59.367186 | orchestrator | Monday 10 February 2025 09:15:59 +0000 (0:00:00.220) 0:00:28.726 ******* 2025-02-10 09:15:59.579019 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:59.581101 | orchestrator | 2025-02-10 09:15:59.582165 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:59.582637 | orchestrator | Monday 10 February 2025 09:15:59 +0000 (0:00:00.214) 0:00:28.940 ******* 2025-02-10 09:15:59.994329 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:15:59.994682 | orchestrator | 2025-02-10 09:15:59.995463 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:15:59.995969 | orchestrator | Monday 10 February 2025 09:15:59 +0000 (0:00:00.416) 0:00:29.357 ******* 2025-02-10 09:16:00.444443 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899) 2025-02-10 09:16:00.445111 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899) 2025-02-10 09:16:00.446440 | orchestrator | 2025-02-10 09:16:00.447280 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:00.447346 | orchestrator | Monday 10 February 2025 09:16:00 +0000 (0:00:00.450) 0:00:29.807 ******* 2025-02-10 09:16:00.912832 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b66e53a8-0538-4d41-8a28-7ec132d4688f) 2025-02-10 09:16:00.913072 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b66e53a8-0538-4d41-8a28-7ec132d4688f) 2025-02-10 09:16:00.913098 | orchestrator | 2025-02-10 09:16:00.913129 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:00.913470 | orchestrator | Monday 10 February 2025 09:16:00 +0000 (0:00:00.468) 0:00:30.275 ******* 2025-02-10 09:16:01.381141 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a5ae359e-12ae-4197-8eef-3ae34f8c1334) 2025-02-10 09:16:01.886678 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a5ae359e-12ae-4197-8eef-3ae34f8c1334) 2025-02-10 09:16:01.887560 | orchestrator | 2025-02-10 09:16:01.887590 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:01.887600 | orchestrator | Monday 10 February 2025 09:16:01 +0000 (0:00:00.465) 0:00:30.741 ******* 2025-02-10 09:16:01.887624 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2438f8bd-e1da-4f87-b9a4-97b4ac996f9c) 2025-02-10 09:16:01.888093 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2438f8bd-e1da-4f87-b9a4-97b4ac996f9c) 2025-02-10 09:16:01.888112 | orchestrator | 2025-02-10 09:16:01.889263 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:01.889925 | orchestrator | Monday 10 February 2025 09:16:01 +0000 (0:00:00.506) 0:00:31.248 ******* 2025-02-10 09:16:02.260714 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-10 09:16:02.261209 | orchestrator | 2025-02-10 09:16:02.262303 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:02.265349 | orchestrator | Monday 10 February 2025 09:16:02 +0000 (0:00:00.375) 0:00:31.623 ******* 2025-02-10 09:16:02.804950 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-02-10 09:16:02.805253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-02-10 09:16:02.805294 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-02-10 09:16:02.805952 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-02-10 09:16:02.806201 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-02-10 09:16:02.806793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-02-10 09:16:02.807150 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-02-10 09:16:02.807737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-02-10 09:16:02.808053 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-02-10 09:16:02.808956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-02-10 09:16:02.809151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-02-10 09:16:02.809212 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-02-10 09:16:02.809339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-02-10 09:16:02.809991 | orchestrator | 2025-02-10 09:16:02.810162 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:02.810267 | orchestrator | Monday 10 February 2025 09:16:02 +0000 (0:00:00.540) 0:00:32.164 ******* 2025-02-10 09:16:03.031000 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:03.031239 | orchestrator | 2025-02-10 09:16:03.032273 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:03.033183 | orchestrator | Monday 10 February 2025 09:16:03 +0000 (0:00:00.226) 0:00:32.390 ******* 2025-02-10 09:16:03.233239 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:03.233513 | orchestrator | 2025-02-10 09:16:03.234061 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:03.234360 | orchestrator | Monday 10 February 2025 09:16:03 +0000 (0:00:00.206) 0:00:32.597 ******* 2025-02-10 09:16:03.676199 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:03.676657 | orchestrator | 2025-02-10 09:16:03.677342 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:03.678223 | orchestrator | Monday 10 February 2025 09:16:03 +0000 (0:00:00.441) 0:00:33.038 ******* 2025-02-10 09:16:03.899910 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:03.900173 | orchestrator | 2025-02-10 09:16:03.901486 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:03.901874 | orchestrator | Monday 10 February 2025 09:16:03 +0000 (0:00:00.224) 0:00:33.263 ******* 2025-02-10 09:16:04.124850 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:04.127662 | orchestrator | 2025-02-10 09:16:04.127892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:04.128932 | orchestrator | Monday 10 February 2025 09:16:04 +0000 (0:00:00.223) 0:00:33.486 ******* 2025-02-10 09:16:04.337616 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:04.337850 | orchestrator | 2025-02-10 09:16:04.338526 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:04.342086 | orchestrator | Monday 10 February 2025 09:16:04 +0000 (0:00:00.212) 0:00:33.699 ******* 2025-02-10 09:16:04.552366 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:04.552864 | orchestrator | 2025-02-10 09:16:04.553446 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:04.554105 | orchestrator | Monday 10 February 2025 09:16:04 +0000 (0:00:00.214) 0:00:33.914 ******* 2025-02-10 09:16:04.758567 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:04.759754 | orchestrator | 2025-02-10 09:16:04.760095 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-02-10 09:16:04.761977 | orchestrator | Monday 10 February 2025 09:16:04 +0000 (0:00:00.207) 0:00:34.121 ******* 2025-02-10 09:16:05.484487 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-02-10 09:16:05.484731 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-02-10 09:16:05.484766 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-02-10 09:16:05.485090 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-02-10 09:16:05.486527 | orchestrator | 2025-02-10 09:16:05.690149 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:05.690362 | orchestrator | Monday 10 February 2025 09:16:05 +0000 (0:00:00.724) 0:00:34.846 ******* 2025-02-10 09:16:05.690467 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:05.690633 | orchestrator | 2025-02-10 09:16:05.893096 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:05.893236 | orchestrator | Monday 10 February 2025 09:16:05 +0000 (0:00:00.206) 0:00:35.053 ******* 2025-02-10 09:16:05.893274 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:05.893532 | orchestrator | 2025-02-10 09:16:05.893562 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:05.893637 | orchestrator | Monday 10 February 2025 09:16:05 +0000 (0:00:00.198) 0:00:35.251 ******* 2025-02-10 09:16:06.100952 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:06.101147 | orchestrator | 2025-02-10 09:16:06.102625 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:06.103480 | orchestrator | Monday 10 February 2025 09:16:06 +0000 (0:00:00.210) 0:00:35.462 ******* 2025-02-10 09:16:06.762612 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:06.762858 | orchestrator | 2025-02-10 09:16:06.763225 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-02-10 09:16:06.763511 | orchestrator | Monday 10 February 2025 09:16:06 +0000 (0:00:00.662) 0:00:36.125 ******* 2025-02-10 09:16:06.902354 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:06.903448 | orchestrator | 2025-02-10 09:16:06.905689 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-02-10 09:16:07.124371 | orchestrator | Monday 10 February 2025 09:16:06 +0000 (0:00:00.139) 0:00:36.264 ******* 2025-02-10 09:16:07.124607 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8f95f397-c0f5-5bc9-9af0-9f577faebed9'}}) 2025-02-10 09:16:07.124687 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '204ceda1-8353-534a-a397-2ce8fe516c0b'}}) 2025-02-10 09:16:07.124706 | orchestrator | 2025-02-10 09:16:07.124721 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-02-10 09:16:07.124740 | orchestrator | Monday 10 February 2025 09:16:07 +0000 (0:00:00.222) 0:00:36.487 ******* 2025-02-10 09:16:08.910832 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'}) 2025-02-10 09:16:08.911089 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 
'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'}) 2025-02-10 09:16:08.912374 | orchestrator | 2025-02-10 09:16:08.913780 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-02-10 09:16:08.915816 | orchestrator | Monday 10 February 2025 09:16:08 +0000 (0:00:01.783) 0:00:38.271 ******* 2025-02-10 09:16:09.085886 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:09.086494 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 09:16:09.086546 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:09.088136 | orchestrator | 2025-02-10 09:16:09.088259 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-02-10 09:16:09.089612 | orchestrator | Monday 10 February 2025 09:16:09 +0000 (0:00:00.177) 0:00:38.448 ******* 2025-02-10 09:16:10.436355 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'}) 2025-02-10 09:16:10.438958 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'}) 2025-02-10 09:16:10.439888 | orchestrator | 2025-02-10 09:16:10.440062 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-02-10 09:16:10.440603 | orchestrator | Monday 10 February 2025 09:16:10 +0000 (0:00:01.348) 0:00:39.797 ******* 2025-02-10 09:16:10.606336 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:10.607132 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 09:16:10.607183 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:10.607927 | orchestrator | 2025-02-10 09:16:10.609215 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-02-10 09:16:10.609761 | orchestrator | Monday 10 February 2025 09:16:10 +0000 (0:00:00.171) 0:00:39.968 ******* 2025-02-10 09:16:10.747374 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:10.747671 | orchestrator | 2025-02-10 09:16:10.748857 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-02-10 09:16:10.749179 | orchestrator | Monday 10 February 2025 09:16:10 +0000 (0:00:00.141) 0:00:40.109 ******* 2025-02-10 09:16:10.918226 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:10.919745 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 09:16:10.920006 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:10.921164 | orchestrator | 2025-02-10 09:16:10.921640 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-02-10 09:16:10.922484 | orchestrator | Monday 10 
February 2025 09:16:10 +0000 (0:00:00.170) 0:00:40.280 ******* 2025-02-10 09:16:11.269718 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:11.270158 | orchestrator | 2025-02-10 09:16:11.270205 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-02-10 09:16:11.270604 | orchestrator | Monday 10 February 2025 09:16:11 +0000 (0:00:00.351) 0:00:40.631 ******* 2025-02-10 09:16:11.466515 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:11.466910 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 09:16:11.467429 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:11.467624 | orchestrator | 2025-02-10 09:16:11.468269 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-02-10 09:16:11.468797 | orchestrator | Monday 10 February 2025 09:16:11 +0000 (0:00:00.198) 0:00:40.830 ******* 2025-02-10 09:16:11.604020 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:11.604191 | orchestrator | 2025-02-10 09:16:11.604213 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-02-10 09:16:11.604236 | orchestrator | Monday 10 February 2025 09:16:11 +0000 (0:00:00.136) 0:00:40.967 ******* 2025-02-10 09:16:11.781652 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:11.785376 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 09:16:11.935819 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:11.936696 | orchestrator | 2025-02-10 09:16:11.936717 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-02-10 09:16:11.936727 | orchestrator | Monday 10 February 2025 09:16:11 +0000 (0:00:00.176) 0:00:41.143 ******* 2025-02-10 09:16:11.936750 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:16:11.938739 | orchestrator | 2025-02-10 09:16:12.120064 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-02-10 09:16:12.120211 | orchestrator | Monday 10 February 2025 09:16:11 +0000 (0:00:00.154) 0:00:41.298 ******* 2025-02-10 09:16:12.120250 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:12.120586 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 09:16:12.121365 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:12.124691 | orchestrator | 2025-02-10 09:16:12.124887 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-02-10 09:16:12.124937 | orchestrator | Monday 10 February 2025 09:16:12 +0000 (0:00:00.181) 0:00:41.480 ******* 2025-02-10 09:16:12.360119 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 
'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:12.360612 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 09:16:12.361425 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:12.362336 | orchestrator | 2025-02-10 09:16:12.362840 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-02-10 09:16:12.363414 | orchestrator | Monday 10 February 2025 09:16:12 +0000 (0:00:00.242) 0:00:41.722 ******* 2025-02-10 09:16:12.535504 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:12.535746 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 09:16:12.536333 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:12.536871 | orchestrator | 2025-02-10 09:16:12.538078 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-02-10 09:16:12.539109 | orchestrator | Monday 10 February 2025 09:16:12 +0000 (0:00:00.176) 0:00:41.899 ******* 2025-02-10 09:16:12.683573 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:12.684820 | orchestrator | 2025-02-10 09:16:12.687169 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-02-10 09:16:12.687214 | orchestrator | Monday 10 February 2025 09:16:12 +0000 (0:00:00.146) 0:00:42.046 ******* 2025-02-10 09:16:12.834659 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:12.835641 | orchestrator | 2025-02-10 09:16:12.838336 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-02-10 09:16:12.972085 | orchestrator | Monday 10 February 2025 09:16:12 +0000 (0:00:00.151) 0:00:42.197 ******* 2025-02-10 09:16:12.972237 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:12.975299 | orchestrator | 2025-02-10 09:16:12.976598 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-02-10 09:16:12.976633 | orchestrator | Monday 10 February 2025 09:16:12 +0000 (0:00:00.136) 0:00:42.334 ******* 2025-02-10 09:16:13.321315 | orchestrator | ok: [testbed-node-4] => { 2025-02-10 09:16:13.322125 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-02-10 09:16:13.325780 | orchestrator | } 2025-02-10 09:16:13.326358 | orchestrator | 2025-02-10 09:16:13.326419 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-02-10 09:16:13.475789 | orchestrator | Monday 10 February 2025 09:16:13 +0000 (0:00:00.349) 0:00:42.684 ******* 2025-02-10 09:16:13.475917 | orchestrator | ok: [testbed-node-4] => { 2025-02-10 09:16:13.475961 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-02-10 09:16:13.476222 | orchestrator | } 2025-02-10 09:16:13.476753 | orchestrator | 2025-02-10 09:16:13.477232 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-02-10 09:16:13.477955 | orchestrator | Monday 10 February 2025 09:16:13 +0000 (0:00:00.155) 0:00:42.839 ******* 2025-02-10 09:16:13.606224 | orchestrator | ok: [testbed-node-4] => { 2025-02-10 09:16:13.606478 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-02-10 
09:16:13.606508 | orchestrator | } 2025-02-10 09:16:13.607024 | orchestrator | 2025-02-10 09:16:13.607603 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-02-10 09:16:13.608376 | orchestrator | Monday 10 February 2025 09:16:13 +0000 (0:00:00.130) 0:00:42.969 ******* 2025-02-10 09:16:14.139756 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:16:14.140888 | orchestrator | 2025-02-10 09:16:14.140930 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-02-10 09:16:14.142186 | orchestrator | Monday 10 February 2025 09:16:14 +0000 (0:00:00.530) 0:00:43.499 ******* 2025-02-10 09:16:14.634133 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:16:14.635200 | orchestrator | 2025-02-10 09:16:14.636108 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-02-10 09:16:14.636145 | orchestrator | Monday 10 February 2025 09:16:14 +0000 (0:00:00.495) 0:00:43.995 ******* 2025-02-10 09:16:15.207026 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:16:15.208339 | orchestrator | 2025-02-10 09:16:15.209064 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-02-10 09:16:15.209153 | orchestrator | Monday 10 February 2025 09:16:15 +0000 (0:00:00.572) 0:00:44.567 ******* 2025-02-10 09:16:15.362834 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:16:15.363245 | orchestrator | 2025-02-10 09:16:15.363266 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-02-10 09:16:15.363915 | orchestrator | Monday 10 February 2025 09:16:15 +0000 (0:00:00.158) 0:00:44.725 ******* 2025-02-10 09:16:15.496124 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:15.496283 | orchestrator | 2025-02-10 09:16:15.496308 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-02-10 09:16:15.496486 | orchestrator | Monday 10 February 2025 09:16:15 +0000 (0:00:00.132) 0:00:44.858 ******* 2025-02-10 09:16:15.623141 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:15.624063 | orchestrator | 2025-02-10 09:16:15.624971 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-02-10 09:16:15.625640 | orchestrator | Monday 10 February 2025 09:16:15 +0000 (0:00:00.126) 0:00:44.984 ******* 2025-02-10 09:16:15.769689 | orchestrator | ok: [testbed-node-4] => { 2025-02-10 09:16:15.769922 | orchestrator |  "vgs_report": { 2025-02-10 09:16:15.770456 | orchestrator |  "vg": [] 2025-02-10 09:16:15.771195 | orchestrator |  } 2025-02-10 09:16:15.772054 | orchestrator | } 2025-02-10 09:16:15.773368 | orchestrator | 2025-02-10 09:16:15.774102 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-02-10 09:16:15.774968 | orchestrator | Monday 10 February 2025 09:16:15 +0000 (0:00:00.146) 0:00:45.131 ******* 2025-02-10 09:16:15.907080 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:15.907290 | orchestrator | 2025-02-10 09:16:15.907735 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-02-10 09:16:15.908678 | orchestrator | Monday 10 February 2025 09:16:15 +0000 (0:00:00.138) 0:00:45.269 ******* 2025-02-10 09:16:16.250527 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:16.250772 | orchestrator | 2025-02-10 09:16:16.251858 | orchestrator | TASK [Print size needed for LVs on 
ceph_db_devices] **************************** 2025-02-10 09:16:16.252516 | orchestrator | Monday 10 February 2025 09:16:16 +0000 (0:00:00.342) 0:00:45.611 ******* 2025-02-10 09:16:16.421608 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:16.424775 | orchestrator | 2025-02-10 09:16:16.424852 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-02-10 09:16:16.425125 | orchestrator | Monday 10 February 2025 09:16:16 +0000 (0:00:00.170) 0:00:45.782 ******* 2025-02-10 09:16:16.566384 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:16.566632 | orchestrator | 2025-02-10 09:16:16.567949 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-02-10 09:16:16.727951 | orchestrator | Monday 10 February 2025 09:16:16 +0000 (0:00:00.146) 0:00:45.929 ******* 2025-02-10 09:16:16.728139 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:16.728221 | orchestrator | 2025-02-10 09:16:16.728699 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-02-10 09:16:16.729781 | orchestrator | Monday 10 February 2025 09:16:16 +0000 (0:00:00.160) 0:00:46.089 ******* 2025-02-10 09:16:16.864274 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:16.868135 | orchestrator | 2025-02-10 09:16:16.868936 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-02-10 09:16:16.868978 | orchestrator | Monday 10 February 2025 09:16:16 +0000 (0:00:00.137) 0:00:46.227 ******* 2025-02-10 09:16:17.003592 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:17.004372 | orchestrator | 2025-02-10 09:16:17.004469 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-02-10 09:16:17.007729 | orchestrator | Monday 10 February 2025 09:16:16 +0000 (0:00:00.138) 0:00:46.366 ******* 2025-02-10 09:16:17.157204 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:17.157559 | orchestrator | 2025-02-10 09:16:17.157692 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-02-10 09:16:17.157937 | orchestrator | Monday 10 February 2025 09:16:17 +0000 (0:00:00.153) 0:00:46.519 ******* 2025-02-10 09:16:17.292846 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:17.439959 | orchestrator | 2025-02-10 09:16:17.440092 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-02-10 09:16:17.440134 | orchestrator | Monday 10 February 2025 09:16:17 +0000 (0:00:00.135) 0:00:46.655 ******* 2025-02-10 09:16:17.440167 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:17.440263 | orchestrator | 2025-02-10 09:16:17.441118 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-02-10 09:16:17.441840 | orchestrator | Monday 10 February 2025 09:16:17 +0000 (0:00:00.146) 0:00:46.801 ******* 2025-02-10 09:16:17.581164 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:17.582231 | orchestrator | 2025-02-10 09:16:17.583174 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-02-10 09:16:17.583619 | orchestrator | Monday 10 February 2025 09:16:17 +0000 (0:00:00.143) 0:00:46.944 ******* 2025-02-10 09:16:17.736936 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:17.737148 | orchestrator | 2025-02-10 09:16:17.740626 | orchestrator | TASK [Fail if DB LV 
size < 30 GiB for ceph_db_devices] ************************* 2025-02-10 09:16:17.875636 | orchestrator | Monday 10 February 2025 09:16:17 +0000 (0:00:00.154) 0:00:47.099 ******* 2025-02-10 09:16:17.875793 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:17.875921 | orchestrator | 2025-02-10 09:16:17.877708 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-02-10 09:16:18.234388 | orchestrator | Monday 10 February 2025 09:16:17 +0000 (0:00:00.138) 0:00:47.238 ******* 2025-02-10 09:16:18.234579 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:18.234727 | orchestrator | 2025-02-10 09:16:18.235464 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-02-10 09:16:18.236136 | orchestrator | Monday 10 February 2025 09:16:18 +0000 (0:00:00.358) 0:00:47.596 ******* 2025-02-10 09:16:18.432841 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:18.600694 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 09:16:18.600813 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:18.600826 | orchestrator | 2025-02-10 09:16:18.600838 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-02-10 09:16:18.600849 | orchestrator | Monday 10 February 2025 09:16:18 +0000 (0:00:00.198) 0:00:47.795 ******* 2025-02-10 09:16:18.600872 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:18.601100 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 09:16:18.601478 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:18.601718 | orchestrator | 2025-02-10 09:16:18.602135 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-02-10 09:16:18.602389 | orchestrator | Monday 10 February 2025 09:16:18 +0000 (0:00:00.169) 0:00:47.964 ******* 2025-02-10 09:16:18.770696 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:18.770919 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 09:16:18.770949 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:18.771974 | orchestrator | 2025-02-10 09:16:18.772515 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-02-10 09:16:18.773307 | orchestrator | Monday 10 February 2025 09:16:18 +0000 (0:00:00.168) 0:00:48.133 ******* 2025-02-10 09:16:18.938257 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:18.939471 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 
09:16:18.940050 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:18.941034 | orchestrator | 2025-02-10 09:16:18.941751 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-02-10 09:16:18.942489 | orchestrator | Monday 10 February 2025 09:16:18 +0000 (0:00:00.167) 0:00:48.300 ******* 2025-02-10 09:16:19.122711 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:19.123620 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 09:16:19.123663 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:19.124614 | orchestrator | 2025-02-10 09:16:19.124852 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-02-10 09:16:19.124870 | orchestrator | Monday 10 February 2025 09:16:19 +0000 (0:00:00.183) 0:00:48.483 ******* 2025-02-10 09:16:19.306251 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:19.306595 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 09:16:19.306885 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:19.310183 | orchestrator | 2025-02-10 09:16:19.485297 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-02-10 09:16:19.485540 | orchestrator | Monday 10 February 2025 09:16:19 +0000 (0:00:00.185) 0:00:48.669 ******* 2025-02-10 09:16:19.485599 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:19.488486 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 09:16:19.488836 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:19.488879 | orchestrator | 2025-02-10 09:16:19.489845 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-02-10 09:16:19.490853 | orchestrator | Monday 10 February 2025 09:16:19 +0000 (0:00:00.177) 0:00:48.846 ******* 2025-02-10 09:16:19.656748 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:19.657038 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 09:16:19.657390 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:19.658321 | orchestrator | 2025-02-10 09:16:19.659185 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-02-10 09:16:19.659565 | orchestrator | Monday 10 February 2025 09:16:19 +0000 (0:00:00.172) 0:00:49.019 ******* 2025-02-10 09:16:20.181393 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:16:20.183786 | orchestrator | 2025-02-10 09:16:20.183865 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] 
******************************** 2025-02-10 09:16:20.707614 | orchestrator | Monday 10 February 2025 09:16:20 +0000 (0:00:00.522) 0:00:49.542 ******* 2025-02-10 09:16:20.707813 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:16:20.707896 | orchestrator | 2025-02-10 09:16:20.709840 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-02-10 09:16:20.710633 | orchestrator | Monday 10 February 2025 09:16:20 +0000 (0:00:00.524) 0:00:50.066 ******* 2025-02-10 09:16:21.079129 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:16:21.249546 | orchestrator | 2025-02-10 09:16:21.249670 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-02-10 09:16:21.249684 | orchestrator | Monday 10 February 2025 09:16:21 +0000 (0:00:00.370) 0:00:50.437 ******* 2025-02-10 09:16:21.249711 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'vg_name': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'}) 2025-02-10 09:16:21.250106 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'vg_name': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'}) 2025-02-10 09:16:21.250130 | orchestrator | 2025-02-10 09:16:21.251770 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-02-10 09:16:21.252264 | orchestrator | Monday 10 February 2025 09:16:21 +0000 (0:00:00.172) 0:00:50.610 ******* 2025-02-10 09:16:21.425931 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:21.426589 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 09:16:21.426627 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:21.428917 | orchestrator | 2025-02-10 09:16:21.429836 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-02-10 09:16:21.430381 | orchestrator | Monday 10 February 2025 09:16:21 +0000 (0:00:00.178) 0:00:50.788 ******* 2025-02-10 09:16:21.592388 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:21.592678 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 09:16:21.593642 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:21.594760 | orchestrator | 2025-02-10 09:16:21.595064 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-02-10 09:16:21.595572 | orchestrator | Monday 10 February 2025 09:16:21 +0000 (0:00:00.167) 0:00:50.955 ******* 2025-02-10 09:16:21.771446 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'})  2025-02-10 09:16:21.772499 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'})  2025-02-10 09:16:21.773530 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:21.774531 | orchestrator | 2025-02-10 
09:16:21.775159 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-02-10 09:16:21.775854 | orchestrator | Monday 10 February 2025 09:16:21 +0000 (0:00:00.178) 0:00:51.134 ******* 2025-02-10 09:16:22.655178 | orchestrator | ok: [testbed-node-4] => { 2025-02-10 09:16:22.655927 | orchestrator |  "lvm_report": { 2025-02-10 09:16:22.658480 | orchestrator |  "lv": [ 2025-02-10 09:16:22.660691 | orchestrator |  { 2025-02-10 09:16:22.660710 | orchestrator |  "lv_name": "osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b", 2025-02-10 09:16:22.661123 | orchestrator |  "vg_name": "ceph-204ceda1-8353-534a-a397-2ce8fe516c0b" 2025-02-10 09:16:22.661982 | orchestrator |  }, 2025-02-10 09:16:22.662215 | orchestrator |  { 2025-02-10 09:16:22.662983 | orchestrator |  "lv_name": "osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9", 2025-02-10 09:16:22.663349 | orchestrator |  "vg_name": "ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9" 2025-02-10 09:16:22.663860 | orchestrator |  } 2025-02-10 09:16:22.664517 | orchestrator |  ], 2025-02-10 09:16:22.665038 | orchestrator |  "pv": [ 2025-02-10 09:16:22.666253 | orchestrator |  { 2025-02-10 09:16:22.667093 | orchestrator |  "pv_name": "/dev/sdb", 2025-02-10 09:16:22.668017 | orchestrator |  "vg_name": "ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9" 2025-02-10 09:16:22.668920 | orchestrator |  }, 2025-02-10 09:16:22.669102 | orchestrator |  { 2025-02-10 09:16:22.669732 | orchestrator |  "pv_name": "/dev/sdc", 2025-02-10 09:16:22.670123 | orchestrator |  "vg_name": "ceph-204ceda1-8353-534a-a397-2ce8fe516c0b" 2025-02-10 09:16:22.670571 | orchestrator |  } 2025-02-10 09:16:22.670883 | orchestrator |  ] 2025-02-10 09:16:22.671586 | orchestrator |  } 2025-02-10 09:16:22.671780 | orchestrator | } 2025-02-10 09:16:22.672686 | orchestrator | 2025-02-10 09:16:22.672789 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-02-10 09:16:22.673476 | orchestrator | 2025-02-10 09:16:22.673998 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-10 09:16:22.674677 | orchestrator | Monday 10 February 2025 09:16:22 +0000 (0:00:00.882) 0:00:52.017 ******* 2025-02-10 09:16:22.908576 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-02-10 09:16:22.909113 | orchestrator | 2025-02-10 09:16:22.909159 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-10 09:16:22.909710 | orchestrator | Monday 10 February 2025 09:16:22 +0000 (0:00:00.253) 0:00:52.271 ******* 2025-02-10 09:16:23.165174 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:16:23.165498 | orchestrator | 2025-02-10 09:16:23.166219 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:23.167638 | orchestrator | Monday 10 February 2025 09:16:23 +0000 (0:00:00.257) 0:00:52.528 ******* 2025-02-10 09:16:23.671145 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-02-10 09:16:23.671355 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-02-10 09:16:23.671733 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-02-10 09:16:23.672439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-02-10 09:16:23.674805 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-02-10 09:16:23.675603 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-02-10 09:16:23.675638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-02-10 09:16:23.675656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-02-10 09:16:23.675678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-02-10 09:16:23.677951 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-02-10 09:16:23.678170 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-02-10 09:16:23.678711 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-02-10 09:16:23.679451 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-02-10 09:16:23.679848 | orchestrator | 2025-02-10 09:16:23.680261 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:23.680756 | orchestrator | Monday 10 February 2025 09:16:23 +0000 (0:00:00.505) 0:00:53.033 ******* 2025-02-10 09:16:23.866934 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:23.867165 | orchestrator | 2025-02-10 09:16:23.867560 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:23.867600 | orchestrator | Monday 10 February 2025 09:16:23 +0000 (0:00:00.196) 0:00:53.229 ******* 2025-02-10 09:16:24.096261 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:24.096878 | orchestrator | 2025-02-10 09:16:24.097058 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:24.298273 | orchestrator | Monday 10 February 2025 09:16:24 +0000 (0:00:00.227) 0:00:53.457 ******* 2025-02-10 09:16:24.298493 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:24.298572 | orchestrator | 2025-02-10 09:16:24.298865 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:24.299108 | orchestrator | Monday 10 February 2025 09:16:24 +0000 (0:00:00.202) 0:00:53.660 ******* 2025-02-10 09:16:24.498558 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:24.499219 | orchestrator | 2025-02-10 09:16:24.503527 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:24.504102 | orchestrator | Monday 10 February 2025 09:16:24 +0000 (0:00:00.200) 0:00:53.860 ******* 2025-02-10 09:16:24.703592 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:24.704989 | orchestrator | 2025-02-10 09:16:24.705663 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:24.707250 | orchestrator | Monday 10 February 2025 09:16:24 +0000 (0:00:00.205) 0:00:54.066 ******* 2025-02-10 09:16:25.130550 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:25.131290 | orchestrator | 2025-02-10 09:16:25.132652 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:25.134313 | orchestrator | Monday 10 February 2025 09:16:25 +0000 (0:00:00.427) 0:00:54.493 ******* 2025-02-10 09:16:25.330107 | orchestrator | skipping: 
[testbed-node-5] 2025-02-10 09:16:25.330932 | orchestrator | 2025-02-10 09:16:25.331760 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:25.333024 | orchestrator | Monday 10 February 2025 09:16:25 +0000 (0:00:00.198) 0:00:54.692 ******* 2025-02-10 09:16:25.539245 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:25.539792 | orchestrator | 2025-02-10 09:16:25.540477 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:25.540714 | orchestrator | Monday 10 February 2025 09:16:25 +0000 (0:00:00.210) 0:00:54.902 ******* 2025-02-10 09:16:26.000396 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642) 2025-02-10 09:16:26.000994 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642) 2025-02-10 09:16:26.002010 | orchestrator | 2025-02-10 09:16:26.002535 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:26.003232 | orchestrator | Monday 10 February 2025 09:16:25 +0000 (0:00:00.460) 0:00:55.363 ******* 2025-02-10 09:16:26.459546 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f26c39ad-11ff-4bfe-ad92-01d3e6216f06) 2025-02-10 09:16:26.459696 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f26c39ad-11ff-4bfe-ad92-01d3e6216f06) 2025-02-10 09:16:26.459711 | orchestrator | 2025-02-10 09:16:26.459719 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:26.459731 | orchestrator | Monday 10 February 2025 09:16:26 +0000 (0:00:00.455) 0:00:55.818 ******* 2025-02-10 09:16:26.903559 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8c6e9329-7e35-46a5-ba0c-0fddbc56ea2a) 2025-02-10 09:16:26.903883 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8c6e9329-7e35-46a5-ba0c-0fddbc56ea2a) 2025-02-10 09:16:26.906813 | orchestrator | 2025-02-10 09:16:26.908484 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:26.909595 | orchestrator | Monday 10 February 2025 09:16:26 +0000 (0:00:00.446) 0:00:56.265 ******* 2025-02-10 09:16:27.357232 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_30eee918-495f-46ac-9f20-7bf018cd9f92) 2025-02-10 09:16:27.357578 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_30eee918-495f-46ac-9f20-7bf018cd9f92) 2025-02-10 09:16:27.358496 | orchestrator | 2025-02-10 09:16:27.358889 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-10 09:16:27.360032 | orchestrator | Monday 10 February 2025 09:16:27 +0000 (0:00:00.452) 0:00:56.718 ******* 2025-02-10 09:16:27.694818 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-10 09:16:27.696282 | orchestrator | 2025-02-10 09:16:27.698616 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:27.699539 | orchestrator | Monday 10 February 2025 09:16:27 +0000 (0:00:00.338) 0:00:57.057 ******* 2025-02-10 09:16:28.169299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-02-10 09:16:28.169574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 
2025-02-10 09:16:28.169602 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-02-10 09:16:28.169617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-02-10 09:16:28.169632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-02-10 09:16:28.169652 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-02-10 09:16:28.170009 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-02-10 09:16:28.170775 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-02-10 09:16:28.170874 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-02-10 09:16:28.171055 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-02-10 09:16:28.171531 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-02-10 09:16:28.171760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-02-10 09:16:28.172489 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-02-10 09:16:28.172883 | orchestrator | 2025-02-10 09:16:28.172923 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:28.173263 | orchestrator | Monday 10 February 2025 09:16:28 +0000 (0:00:00.469) 0:00:57.526 ******* 2025-02-10 09:16:28.759552 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:28.760740 | orchestrator | 2025-02-10 09:16:28.760949 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:28.762401 | orchestrator | Monday 10 February 2025 09:16:28 +0000 (0:00:00.594) 0:00:58.120 ******* 2025-02-10 09:16:28.970681 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:28.971615 | orchestrator | 2025-02-10 09:16:28.971736 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:28.973212 | orchestrator | Monday 10 February 2025 09:16:28 +0000 (0:00:00.211) 0:00:58.332 ******* 2025-02-10 09:16:29.188606 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:29.191167 | orchestrator | 2025-02-10 09:16:29.399253 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:29.399395 | orchestrator | Monday 10 February 2025 09:16:29 +0000 (0:00:00.217) 0:00:58.550 ******* 2025-02-10 09:16:29.399476 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:29.399552 | orchestrator | 2025-02-10 09:16:29.399674 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:29.400528 | orchestrator | Monday 10 February 2025 09:16:29 +0000 (0:00:00.210) 0:00:58.760 ******* 2025-02-10 09:16:29.623154 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:29.623916 | orchestrator | 2025-02-10 09:16:29.623962 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:29.626769 | orchestrator | Monday 10 February 2025 09:16:29 +0000 (0:00:00.224) 0:00:58.985 ******* 2025-02-10 09:16:29.828698 | orchestrator | 
skipping: [testbed-node-5] 2025-02-10 09:16:29.828978 | orchestrator | 2025-02-10 09:16:30.053804 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:30.053937 | orchestrator | Monday 10 February 2025 09:16:29 +0000 (0:00:00.201) 0:00:59.186 ******* 2025-02-10 09:16:30.053973 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:30.054159 | orchestrator | 2025-02-10 09:16:30.054184 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:30.054205 | orchestrator | Monday 10 February 2025 09:16:30 +0000 (0:00:00.228) 0:00:59.414 ******* 2025-02-10 09:16:30.280223 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:30.280531 | orchestrator | 2025-02-10 09:16:30.281784 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:30.283978 | orchestrator | Monday 10 February 2025 09:16:30 +0000 (0:00:00.228) 0:00:59.643 ******* 2025-02-10 09:16:31.158187 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-02-10 09:16:31.158649 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-02-10 09:16:31.158709 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-02-10 09:16:31.159247 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-02-10 09:16:31.159951 | orchestrator | 2025-02-10 09:16:31.160611 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:31.161306 | orchestrator | Monday 10 February 2025 09:16:31 +0000 (0:00:00.876) 0:01:00.519 ******* 2025-02-10 09:16:31.360989 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:31.361487 | orchestrator | 2025-02-10 09:16:31.361982 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:31.362977 | orchestrator | Monday 10 February 2025 09:16:31 +0000 (0:00:00.202) 0:01:00.722 ******* 2025-02-10 09:16:32.043871 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:32.268352 | orchestrator | 2025-02-10 09:16:32.268571 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:32.268596 | orchestrator | Monday 10 February 2025 09:16:32 +0000 (0:00:00.678) 0:01:01.400 ******* 2025-02-10 09:16:32.268631 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:32.269405 | orchestrator | 2025-02-10 09:16:32.269575 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-10 09:16:32.270544 | orchestrator | Monday 10 February 2025 09:16:32 +0000 (0:00:00.230) 0:01:01.631 ******* 2025-02-10 09:16:32.480980 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:32.481276 | orchestrator | 2025-02-10 09:16:32.481305 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-02-10 09:16:32.481330 | orchestrator | Monday 10 February 2025 09:16:32 +0000 (0:00:00.211) 0:01:01.842 ******* 2025-02-10 09:16:32.611701 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:32.611946 | orchestrator | 2025-02-10 09:16:32.612653 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-02-10 09:16:32.613296 | orchestrator | Monday 10 February 2025 09:16:32 +0000 (0:00:00.132) 0:01:01.974 ******* 2025-02-10 09:16:32.825883 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'c468f1bf-17d5-510b-8602-ed8efc51f14c'}}) 2025-02-10 09:16:32.827084 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b75c92e-4993-5ff3-a16a-a182a58c3e6b'}}) 2025-02-10 09:16:32.827781 | orchestrator | 2025-02-10 09:16:32.828622 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-02-10 09:16:32.829528 | orchestrator | Monday 10 February 2025 09:16:32 +0000 (0:00:00.213) 0:01:02.187 ******* 2025-02-10 09:16:34.533719 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'}) 2025-02-10 09:16:34.533973 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'}) 2025-02-10 09:16:34.533998 | orchestrator | 2025-02-10 09:16:34.534069 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-02-10 09:16:34.534517 | orchestrator | Monday 10 February 2025 09:16:34 +0000 (0:00:01.706) 0:01:03.894 ******* 2025-02-10 09:16:34.708538 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:34.709317 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  2025-02-10 09:16:34.709369 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:34.710393 | orchestrator | 2025-02-10 09:16:34.711763 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-02-10 09:16:34.711920 | orchestrator | Monday 10 February 2025 09:16:34 +0000 (0:00:00.177) 0:01:04.071 ******* 2025-02-10 09:16:35.979279 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'}) 2025-02-10 09:16:35.981005 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'}) 2025-02-10 09:16:35.981065 | orchestrator | 2025-02-10 09:16:35.981502 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-02-10 09:16:35.981532 | orchestrator | Monday 10 February 2025 09:16:35 +0000 (0:00:01.268) 0:01:05.339 ******* 2025-02-10 09:16:36.164072 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:36.164256 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  2025-02-10 09:16:36.164284 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:36.164760 | orchestrator | 2025-02-10 09:16:36.164863 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-02-10 09:16:36.165031 | orchestrator | Monday 10 February 2025 09:16:36 +0000 (0:00:00.186) 0:01:05.526 ******* 2025-02-10 09:16:36.498971 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:36.500287 | orchestrator | 2025-02-10 09:16:36.500675 | orchestrator | TASK [Print 'Create DB VGs'] 
*************************************************** 2025-02-10 09:16:36.501256 | orchestrator | Monday 10 February 2025 09:16:36 +0000 (0:00:00.335) 0:01:05.861 ******* 2025-02-10 09:16:36.662684 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:36.662888 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  2025-02-10 09:16:36.662906 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:36.662986 | orchestrator | 2025-02-10 09:16:36.663431 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-02-10 09:16:36.663658 | orchestrator | Monday 10 February 2025 09:16:36 +0000 (0:00:00.163) 0:01:06.024 ******* 2025-02-10 09:16:36.802129 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:36.802384 | orchestrator | 2025-02-10 09:16:36.802479 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-02-10 09:16:36.802876 | orchestrator | Monday 10 February 2025 09:16:36 +0000 (0:00:00.140) 0:01:06.165 ******* 2025-02-10 09:16:36.981130 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:36.981504 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  2025-02-10 09:16:36.982662 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:36.983472 | orchestrator | 2025-02-10 09:16:36.983851 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-02-10 09:16:36.984764 | orchestrator | Monday 10 February 2025 09:16:36 +0000 (0:00:00.177) 0:01:06.343 ******* 2025-02-10 09:16:37.131207 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:37.297091 | orchestrator | 2025-02-10 09:16:37.297267 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-02-10 09:16:37.297303 | orchestrator | Monday 10 February 2025 09:16:37 +0000 (0:00:00.150) 0:01:06.494 ******* 2025-02-10 09:16:37.297354 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:37.297582 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  2025-02-10 09:16:37.297968 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:37.299013 | orchestrator | 2025-02-10 09:16:37.299575 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-02-10 09:16:37.299631 | orchestrator | Monday 10 February 2025 09:16:37 +0000 (0:00:00.165) 0:01:06.659 ******* 2025-02-10 09:16:37.445968 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:16:37.446643 | orchestrator | 2025-02-10 09:16:37.446673 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-02-10 09:16:37.448235 | orchestrator | Monday 10 February 2025 09:16:37 +0000 (0:00:00.147) 0:01:06.806 ******* 2025-02-10 09:16:37.602676 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:37.602890 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  2025-02-10 09:16:37.603761 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:37.604709 | orchestrator | 2025-02-10 09:16:37.605352 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-02-10 09:16:37.605877 | orchestrator | Monday 10 February 2025 09:16:37 +0000 (0:00:00.158) 0:01:06.966 ******* 2025-02-10 09:16:37.773863 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:37.776224 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  2025-02-10 09:16:37.777392 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:37.777478 | orchestrator | 2025-02-10 09:16:37.777510 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-02-10 09:16:37.777925 | orchestrator | Monday 10 February 2025 09:16:37 +0000 (0:00:00.168) 0:01:07.134 ******* 2025-02-10 09:16:37.956528 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:37.957888 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  2025-02-10 09:16:37.958131 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:37.959593 | orchestrator | 2025-02-10 09:16:37.960041 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-02-10 09:16:37.960845 | orchestrator | Monday 10 February 2025 09:16:37 +0000 (0:00:00.181) 0:01:07.316 ******* 2025-02-10 09:16:38.108499 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:38.108675 | orchestrator | 2025-02-10 09:16:38.110303 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-02-10 09:16:38.111275 | orchestrator | Monday 10 February 2025 09:16:38 +0000 (0:00:00.154) 0:01:07.471 ******* 2025-02-10 09:16:38.244164 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:38.245014 | orchestrator | 2025-02-10 09:16:38.245622 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-02-10 09:16:38.246346 | orchestrator | Monday 10 February 2025 09:16:38 +0000 (0:00:00.135) 0:01:07.606 ******* 2025-02-10 09:16:38.583058 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:38.583240 | orchestrator | 2025-02-10 09:16:38.583934 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-02-10 09:16:38.585002 | orchestrator | Monday 10 February 2025 09:16:38 +0000 (0:00:00.338) 0:01:07.945 ******* 2025-02-10 09:16:38.738912 | orchestrator | ok: [testbed-node-5] => { 2025-02-10 09:16:38.740174 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-02-10 09:16:38.740797 | orchestrator | } 2025-02-10 09:16:38.741496 | orchestrator | 2025-02-10 09:16:38.742134 | orchestrator | 
TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-02-10 09:16:38.742653 | orchestrator | Monday 10 February 2025 09:16:38 +0000 (0:00:00.155) 0:01:08.100 ******* 2025-02-10 09:16:38.907804 | orchestrator | ok: [testbed-node-5] => { 2025-02-10 09:16:38.908021 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-02-10 09:16:38.908051 | orchestrator | } 2025-02-10 09:16:38.909861 | orchestrator | 2025-02-10 09:16:38.911172 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-02-10 09:16:38.912620 | orchestrator | Monday 10 February 2025 09:16:38 +0000 (0:00:00.169) 0:01:08.269 ******* 2025-02-10 09:16:39.058989 | orchestrator | ok: [testbed-node-5] => { 2025-02-10 09:16:39.059177 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-02-10 09:16:39.059850 | orchestrator | } 2025-02-10 09:16:39.060843 | orchestrator | 2025-02-10 09:16:39.061396 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-02-10 09:16:39.566242 | orchestrator | Monday 10 February 2025 09:16:39 +0000 (0:00:00.152) 0:01:08.421 ******* 2025-02-10 09:16:39.566384 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:16:39.567144 | orchestrator | 2025-02-10 09:16:39.567555 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-02-10 09:16:39.568240 | orchestrator | Monday 10 February 2025 09:16:39 +0000 (0:00:00.506) 0:01:08.928 ******* 2025-02-10 09:16:40.084170 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:16:40.084399 | orchestrator | 2025-02-10 09:16:40.084498 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-02-10 09:16:40.084524 | orchestrator | Monday 10 February 2025 09:16:40 +0000 (0:00:00.516) 0:01:09.445 ******* 2025-02-10 09:16:40.623586 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:16:40.623839 | orchestrator | 2025-02-10 09:16:40.623887 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-02-10 09:16:40.624635 | orchestrator | Monday 10 February 2025 09:16:40 +0000 (0:00:00.539) 0:01:09.984 ******* 2025-02-10 09:16:40.770841 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:16:40.771271 | orchestrator | 2025-02-10 09:16:40.771802 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-02-10 09:16:40.772348 | orchestrator | Monday 10 February 2025 09:16:40 +0000 (0:00:00.147) 0:01:10.132 ******* 2025-02-10 09:16:40.886589 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:40.887266 | orchestrator | 2025-02-10 09:16:40.887312 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-02-10 09:16:40.887633 | orchestrator | Monday 10 February 2025 09:16:40 +0000 (0:00:00.116) 0:01:10.248 ******* 2025-02-10 09:16:41.004792 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:41.005135 | orchestrator | 2025-02-10 09:16:41.006117 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-02-10 09:16:41.006649 | orchestrator | Monday 10 February 2025 09:16:40 +0000 (0:00:00.118) 0:01:10.367 ******* 2025-02-10 09:16:41.145964 | orchestrator | ok: [testbed-node-5] => { 2025-02-10 09:16:41.148958 | orchestrator |  "vgs_report": { 2025-02-10 09:16:41.149022 | orchestrator |  "vg": [] 2025-02-10 09:16:41.149640 | orchestrator |  } 2025-02-10 09:16:41.150368 | orchestrator 
| } 2025-02-10 09:16:41.151253 | orchestrator | 2025-02-10 09:16:41.152022 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-02-10 09:16:41.152973 | orchestrator | Monday 10 February 2025 09:16:41 +0000 (0:00:00.140) 0:01:10.507 ******* 2025-02-10 09:16:41.471251 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:41.471891 | orchestrator | 2025-02-10 09:16:41.472113 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-02-10 09:16:41.472952 | orchestrator | Monday 10 February 2025 09:16:41 +0000 (0:00:00.326) 0:01:10.833 ******* 2025-02-10 09:16:41.613038 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:41.613507 | orchestrator | 2025-02-10 09:16:41.613957 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-02-10 09:16:41.614398 | orchestrator | Monday 10 February 2025 09:16:41 +0000 (0:00:00.141) 0:01:10.975 ******* 2025-02-10 09:16:41.756731 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:41.759864 | orchestrator | 2025-02-10 09:16:41.760638 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-02-10 09:16:41.760688 | orchestrator | Monday 10 February 2025 09:16:41 +0000 (0:00:00.142) 0:01:11.117 ******* 2025-02-10 09:16:41.898575 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:41.898821 | orchestrator | 2025-02-10 09:16:41.899315 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-02-10 09:16:41.900174 | orchestrator | Monday 10 February 2025 09:16:41 +0000 (0:00:00.143) 0:01:11.261 ******* 2025-02-10 09:16:42.044033 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:42.045012 | orchestrator | 2025-02-10 09:16:42.047101 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-02-10 09:16:42.047607 | orchestrator | Monday 10 February 2025 09:16:42 +0000 (0:00:00.145) 0:01:11.407 ******* 2025-02-10 09:16:42.192812 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:42.193516 | orchestrator | 2025-02-10 09:16:42.193843 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-02-10 09:16:42.197074 | orchestrator | Monday 10 February 2025 09:16:42 +0000 (0:00:00.148) 0:01:11.555 ******* 2025-02-10 09:16:42.335467 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:42.338770 | orchestrator | 2025-02-10 09:16:42.339664 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-02-10 09:16:42.340777 | orchestrator | Monday 10 February 2025 09:16:42 +0000 (0:00:00.142) 0:01:11.698 ******* 2025-02-10 09:16:42.489475 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:42.489667 | orchestrator | 2025-02-10 09:16:42.490300 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-02-10 09:16:42.491000 | orchestrator | Monday 10 February 2025 09:16:42 +0000 (0:00:00.153) 0:01:11.851 ******* 2025-02-10 09:16:42.633814 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:42.634411 | orchestrator | 2025-02-10 09:16:42.634663 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-02-10 09:16:42.635166 | orchestrator | Monday 10 February 2025 09:16:42 +0000 (0:00:00.144) 0:01:11.996 ******* 2025-02-10 09:16:42.776909 | orchestrator | 
skipping: [testbed-node-5] 2025-02-10 09:16:42.777986 | orchestrator | 2025-02-10 09:16:42.778547 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-02-10 09:16:42.779637 | orchestrator | Monday 10 February 2025 09:16:42 +0000 (0:00:00.141) 0:01:12.137 ******* 2025-02-10 09:16:42.919327 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:42.919825 | orchestrator | 2025-02-10 09:16:42.920668 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-02-10 09:16:42.921597 | orchestrator | Monday 10 February 2025 09:16:42 +0000 (0:00:00.143) 0:01:12.281 ******* 2025-02-10 09:16:43.063363 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:43.064504 | orchestrator | 2025-02-10 09:16:43.064778 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-02-10 09:16:43.065325 | orchestrator | Monday 10 February 2025 09:16:43 +0000 (0:00:00.143) 0:01:12.425 ******* 2025-02-10 09:16:43.417869 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:43.418146 | orchestrator | 2025-02-10 09:16:43.418650 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-02-10 09:16:43.419140 | orchestrator | Monday 10 February 2025 09:16:43 +0000 (0:00:00.351) 0:01:12.777 ******* 2025-02-10 09:16:43.572959 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:43.573109 | orchestrator | 2025-02-10 09:16:43.573135 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-02-10 09:16:43.573350 | orchestrator | Monday 10 February 2025 09:16:43 +0000 (0:00:00.157) 0:01:12.934 ******* 2025-02-10 09:16:43.745415 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:43.749323 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  2025-02-10 09:16:43.749916 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:43.749964 | orchestrator | 2025-02-10 09:16:43.750786 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-02-10 09:16:43.751240 | orchestrator | Monday 10 February 2025 09:16:43 +0000 (0:00:00.173) 0:01:13.108 ******* 2025-02-10 09:16:43.913514 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:43.914507 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  2025-02-10 09:16:43.916630 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:43.916742 | orchestrator | 2025-02-10 09:16:43.918101 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-02-10 09:16:43.919042 | orchestrator | Monday 10 February 2025 09:16:43 +0000 (0:00:00.167) 0:01:13.275 ******* 2025-02-10 09:16:44.110332 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:44.110617 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  2025-02-10 09:16:44.111073 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:44.111199 | orchestrator | 2025-02-10 09:16:44.111720 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-02-10 09:16:44.295320 | orchestrator | Monday 10 February 2025 09:16:44 +0000 (0:00:00.196) 0:01:13.472 ******* 2025-02-10 09:16:44.295543 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:44.296087 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  2025-02-10 09:16:44.296760 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:44.298007 | orchestrator | 2025-02-10 09:16:44.300381 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-02-10 09:16:44.300540 | orchestrator | Monday 10 February 2025 09:16:44 +0000 (0:00:00.185) 0:01:13.657 ******* 2025-02-10 09:16:44.470392 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:44.471211 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  2025-02-10 09:16:44.471252 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:44.471273 | orchestrator | 2025-02-10 09:16:44.471663 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-02-10 09:16:44.472139 | orchestrator | Monday 10 February 2025 09:16:44 +0000 (0:00:00.175) 0:01:13.833 ******* 2025-02-10 09:16:44.639646 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:44.640216 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  2025-02-10 09:16:44.640445 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:44.641569 | orchestrator | 2025-02-10 09:16:44.644051 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-02-10 09:16:44.816581 | orchestrator | Monday 10 February 2025 09:16:44 +0000 (0:00:00.169) 0:01:14.002 ******* 2025-02-10 09:16:44.816726 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:44.819596 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  2025-02-10 09:16:44.819668 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:44.819787 | orchestrator | 2025-02-10 09:16:44.819815 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-02-10 09:16:44.819844 | orchestrator | Monday 10 February 2025 09:16:44 +0000 (0:00:00.175) 0:01:14.178 ******* 2025-02-10 09:16:44.994923 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:44.995975 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  2025-02-10 09:16:44.996036 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:44.996143 | orchestrator | 2025-02-10 09:16:44.996949 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-02-10 09:16:44.997403 | orchestrator | Monday 10 February 2025 09:16:44 +0000 (0:00:00.179) 0:01:14.357 ******* 2025-02-10 09:16:45.505164 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:16:45.505808 | orchestrator | 2025-02-10 09:16:45.506131 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-02-10 09:16:45.506270 | orchestrator | Monday 10 February 2025 09:16:45 +0000 (0:00:00.510) 0:01:14.867 ******* 2025-02-10 09:16:46.225570 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:16:46.226657 | orchestrator | 2025-02-10 09:16:46.227869 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-02-10 09:16:46.228480 | orchestrator | Monday 10 February 2025 09:16:46 +0000 (0:00:00.718) 0:01:15.586 ******* 2025-02-10 09:16:46.388613 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:16:46.388813 | orchestrator | 2025-02-10 09:16:46.388843 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-02-10 09:16:46.584111 | orchestrator | Monday 10 February 2025 09:16:46 +0000 (0:00:00.164) 0:01:15.751 ******* 2025-02-10 09:16:46.584231 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'vg_name': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'}) 2025-02-10 09:16:46.584500 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'vg_name': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'}) 2025-02-10 09:16:46.584994 | orchestrator | 2025-02-10 09:16:46.585882 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-02-10 09:16:46.586154 | orchestrator | Monday 10 February 2025 09:16:46 +0000 (0:00:00.196) 0:01:15.947 ******* 2025-02-10 09:16:46.748269 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:46.748506 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  2025-02-10 09:16:46.749751 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:46.750221 | orchestrator | 2025-02-10 09:16:46.750576 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-02-10 09:16:46.751030 | orchestrator | Monday 10 February 2025 09:16:46 +0000 (0:00:00.163) 0:01:16.111 ******* 2025-02-10 09:16:46.932105 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:46.932274 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  
2025-02-10 09:16:46.932288 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:46.932298 | orchestrator | 2025-02-10 09:16:46.932306 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-02-10 09:16:46.932319 | orchestrator | Monday 10 February 2025 09:16:46 +0000 (0:00:00.181) 0:01:16.293 ******* 2025-02-10 09:16:47.099222 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'})  2025-02-10 09:16:47.099930 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'})  2025-02-10 09:16:47.100658 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:47.101914 | orchestrator | 2025-02-10 09:16:47.103073 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-02-10 09:16:47.103609 | orchestrator | Monday 10 February 2025 09:16:47 +0000 (0:00:00.168) 0:01:16.461 ******* 2025-02-10 09:16:47.529039 | orchestrator | ok: [testbed-node-5] => { 2025-02-10 09:16:47.529705 | orchestrator |  "lvm_report": { 2025-02-10 09:16:47.530344 | orchestrator |  "lv": [ 2025-02-10 09:16:47.531333 | orchestrator |  { 2025-02-10 09:16:47.532486 | orchestrator |  "lv_name": "osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b", 2025-02-10 09:16:47.533087 | orchestrator |  "vg_name": "ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b" 2025-02-10 09:16:47.534336 | orchestrator |  }, 2025-02-10 09:16:47.535410 | orchestrator |  { 2025-02-10 09:16:47.538113 | orchestrator |  "lv_name": "osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c", 2025-02-10 09:16:47.538551 | orchestrator |  "vg_name": "ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c" 2025-02-10 09:16:47.539205 | orchestrator |  } 2025-02-10 09:16:47.539918 | orchestrator |  ], 2025-02-10 09:16:47.540597 | orchestrator |  "pv": [ 2025-02-10 09:16:47.541001 | orchestrator |  { 2025-02-10 09:16:47.542443 | orchestrator |  "pv_name": "/dev/sdb", 2025-02-10 09:16:47.542641 | orchestrator |  "vg_name": "ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c" 2025-02-10 09:16:47.543134 | orchestrator |  }, 2025-02-10 09:16:47.543636 | orchestrator |  { 2025-02-10 09:16:47.544849 | orchestrator |  "pv_name": "/dev/sdc", 2025-02-10 09:16:47.545331 | orchestrator |  "vg_name": "ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b" 2025-02-10 09:16:47.546006 | orchestrator |  } 2025-02-10 09:16:47.546405 | orchestrator |  ] 2025-02-10 09:16:47.547107 | orchestrator |  } 2025-02-10 09:16:47.547745 | orchestrator | } 2025-02-10 09:16:47.548249 | orchestrator | 2025-02-10 09:16:47.549019 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:16:47.549477 | orchestrator | 2025-02-10 09:16:47 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:16:47.550156 | orchestrator | 2025-02-10 09:16:47 | INFO  | Please wait and do not abort execution. 
2025-02-10 09:16:47.550650 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-02-10 09:16:47.551512 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-02-10 09:16:47.551893 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-02-10 09:16:47.551924 | orchestrator | 2025-02-10 09:16:47.552373 | orchestrator | 2025-02-10 09:16:47.552879 | orchestrator | 2025-02-10 09:16:47.553104 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:16:47.553736 | orchestrator | Monday 10 February 2025 09:16:47 +0000 (0:00:00.429) 0:01:16.891 ******* 2025-02-10 09:16:47.554539 | orchestrator | =============================================================================== 2025-02-10 09:16:47.554862 | orchestrator | Create block VGs -------------------------------------------------------- 5.56s 2025-02-10 09:16:47.555343 | orchestrator | Create block LVs -------------------------------------------------------- 4.13s 2025-02-10 09:16:47.555769 | orchestrator | Print LVM report data --------------------------------------------------- 2.02s 2025-02-10 09:16:47.556161 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.98s 2025-02-10 09:16:47.556621 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.81s 2025-02-10 09:16:47.556982 | orchestrator | Add known links to the list of available block devices ------------------ 1.63s 2025-02-10 09:16:47.557474 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.63s 2025-02-10 09:16:47.557613 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.57s 2025-02-10 09:16:47.557937 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.51s 2025-02-10 09:16:47.558412 | orchestrator | Add known partitions to the list of available block devices ------------- 1.48s 2025-02-10 09:16:47.558894 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.10s 2025-02-10 09:16:47.559386 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s 2025-02-10 09:16:47.559596 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s 2025-02-10 09:16:47.559984 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.85s 2025-02-10 09:16:47.560372 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.74s 2025-02-10 09:16:47.561026 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.73s 2025-02-10 09:16:47.561403 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2025-02-10 09:16:47.561852 | orchestrator | Get initial list of available block devices ----------------------------- 0.72s 2025-02-10 09:16:47.562214 | orchestrator | Combine JSON from _lvs_cmd_output/_pvs_cmd_output ----------------------- 0.70s 2025-02-10 09:16:47.562563 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2025-02-10 09:16:49.632885 | orchestrator | 2025-02-10 09:16:49 | INFO  | Task a81633f6-c90a-415e-bea7-93acd9430b6f (facts) was prepared for execution. 
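The playbook source behind the recap above is not expanded in this console output. Below is a minimal, hypothetical sketch of what the two "changed" steps (Create block VGs / Create block LVs) and the closing report tasks plausibly look like, using the standard community.general LVM modules and LVM's JSON report format. The data/data_vg loop items and the register names _lvs_cmd_output/_pvs_cmd_output are taken from the output above; the module choice, the _block_vgs_to_pvs lookup, and the vg_name selection pattern are assumptions, not the actual OSISM implementation.

    # Sketch only; not the actual OSISM task file.
    - name: Create block VGs
      community.general.lvg:
        vg: "{{ item.data_vg }}"
        # _block_vgs_to_pvs is a hypothetical dict mapping each VG name to its PV,
        # e.g. ceph-c468f1bf-... -> /dev/sdb, as built from ceph_osd_devices above.
        pvs: "{{ _block_vgs_to_pvs[item.data_vg] }}"
        state: present
      loop: "{{ lvm_volumes }}"

    - name: Create block LVs
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"
        size: 100%VG
        shrink: false
      loop: "{{ lvm_volumes }}"

    - name: Get list of Ceph LVs with associated VGs
      ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name -S 'vg_name=~^ceph-'
      register: _lvs_cmd_output
      changed_when: false

    - name: Get list of Ceph PVs with associated VGs
      ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name -S 'vg_name=~^ceph-'
      register: _pvs_cmd_output
      changed_when: false

    - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
      ansible.builtin.set_fact:
        lvm_report:
          lv: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
          pv: "{{ (_pvs_cmd_output.stdout | from_json).report[0].pv }}"

    - name: Print LVM report data
      ansible.builtin.debug:
        var: lvm_report

The lvm_report structure printed for testbed-node-4 and testbed-node-5 above (lv entries with lv_name/vg_name, pv entries with pv_name/vg_name) matches what lvs and pvs emit under report[0] when restricted to those two columns.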
2025-02-10 09:16:52.918001 | orchestrator | 2025-02-10 09:16:49 | INFO  | It takes a moment until task a81633f6-c90a-415e-bea7-93acd9430b6f (facts) has been started and output is visible here. 2025-02-10 09:16:52.918220 | orchestrator | 2025-02-10 09:16:52.918882 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-02-10 09:16:52.918920 | orchestrator | 2025-02-10 09:16:52.921979 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-02-10 09:16:52.922293 | orchestrator | Monday 10 February 2025 09:16:52 +0000 (0:00:00.210) 0:00:00.210 ******* 2025-02-10 09:16:54.015990 | orchestrator | ok: [testbed-manager] 2025-02-10 09:16:54.016701 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:16:54.017282 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:16:54.020902 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:16:54.021763 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:16:54.021804 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:16:54.022547 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:16:54.024914 | orchestrator | 2025-02-10 09:16:54.175134 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-02-10 09:16:54.175272 | orchestrator | Monday 10 February 2025 09:16:54 +0000 (0:00:01.095) 0:00:01.306 ******* 2025-02-10 09:16:54.175308 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:16:54.254555 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:16:54.332745 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:16:54.412698 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:16:54.483416 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:16:55.237157 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:16:55.239180 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:16:55.239529 | orchestrator | 2025-02-10 09:16:55.240884 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-10 09:16:55.241225 | orchestrator | 2025-02-10 09:16:55.241894 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-10 09:16:55.242648 | orchestrator | Monday 10 February 2025 09:16:55 +0000 (0:00:01.225) 0:00:02.531 ******* 2025-02-10 09:17:00.130833 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:17:00.131527 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:17:00.131578 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:17:00.135035 | orchestrator | ok: [testbed-manager] 2025-02-10 09:17:00.135835 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:17:00.135921 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:17:00.135947 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:17:00.135968 | orchestrator | 2025-02-10 09:17:00.136053 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-02-10 09:17:00.136152 | orchestrator | 2025-02-10 09:17:00.136640 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-02-10 09:17:00.137141 | orchestrator | Monday 10 February 2025 09:17:00 +0000 (0:00:04.894) 0:00:07.426 ******* 2025-02-10 09:17:00.287061 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:17:00.368091 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:17:00.447661 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:17:00.524772 | orchestrator | skipping: [testbed-node-2] 2025-02-10 
09:17:00.601676 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:17:00.645145 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:17:00.645335 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:17:00.647111 | orchestrator | 2025-02-10 09:17:00.648465 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:17:00.648520 | orchestrator | 2025-02-10 09:17:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:17:00.650117 | orchestrator | 2025-02-10 09:17:00 | INFO  | Please wait and do not abort execution. 2025-02-10 09:17:00.650156 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:17:00.651297 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:17:00.651870 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:17:00.652955 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:17:00.653705 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:17:00.654604 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:17:00.655846 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:17:00.655974 | orchestrator | 2025-02-10 09:17:00.656488 | orchestrator | 2025-02-10 09:17:00.656630 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:17:00.657262 | orchestrator | Monday 10 February 2025 09:17:00 +0000 (0:00:00.514) 0:00:07.941 ******* 2025-02-10 09:17:00.657484 | orchestrator | =============================================================================== 2025-02-10 09:17:00.657961 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.89s 2025-02-10 09:17:00.658430 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s 2025-02-10 09:17:00.659042 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.10s 2025-02-10 09:17:00.659639 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2025-02-10 09:17:01.220305 | orchestrator | 2025-02-10 09:17:01.224430 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Feb 10 09:17:01 UTC 2025 2025-02-10 09:17:02.631099 | orchestrator | 2025-02-10 09:17:02.631221 | orchestrator | 2025-02-10 09:17:02 | INFO  | Collection nutshell is prepared for execution 2025-02-10 09:17:02.635253 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [0] - dotfiles 2025-02-10 09:17:02.635322 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [0] - homer 2025-02-10 09:17:02.636523 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [0] - netdata 2025-02-10 09:17:02.636561 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [0] - openstackclient 2025-02-10 09:17:02.636570 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [0] - phpmyadmin 2025-02-10 09:17:02.636579 | orchestrator | 2025-02-10 09:17:02 | INFO  | A [0] - common 2025-02-10 09:17:02.636593 | orchestrator | 2025-02-10 09:17:02 | INFO  | A [1] -- loadbalancer 2025-02-10 09:17:02.636772 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [2] 
--- opensearch 2025-02-10 09:17:02.636787 | orchestrator | 2025-02-10 09:17:02 | INFO  | A [2] --- mariadb-ng 2025-02-10 09:17:02.636795 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [3] ---- horizon 2025-02-10 09:17:02.636807 | orchestrator | 2025-02-10 09:17:02 | INFO  | A [3] ---- keystone 2025-02-10 09:17:02.637148 | orchestrator | 2025-02-10 09:17:02 | INFO  | A [4] ----- neutron 2025-02-10 09:17:02.637180 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [5] ------ wait-for-nova 2025-02-10 09:17:02.637191 | orchestrator | 2025-02-10 09:17:02 | INFO  | A [5] ------ octavia 2025-02-10 09:17:02.637204 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [4] ----- barbican 2025-02-10 09:17:02.637267 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [4] ----- designate 2025-02-10 09:17:02.637281 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [4] ----- ironic 2025-02-10 09:17:02.637640 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [4] ----- placement 2025-02-10 09:17:02.637669 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [4] ----- magnum 2025-02-10 09:17:02.637684 | orchestrator | 2025-02-10 09:17:02 | INFO  | A [1] -- openvswitch 2025-02-10 09:17:02.637725 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [2] --- ovn 2025-02-10 09:17:02.638154 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [1] -- memcached 2025-02-10 09:17:02.638184 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [1] -- redis 2025-02-10 09:17:02.638213 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [1] -- rabbitmq-ng 2025-02-10 09:17:02.638222 | orchestrator | 2025-02-10 09:17:02 | INFO  | A [0] - kubernetes 2025-02-10 09:17:02.638230 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [1] -- kubeconfig 2025-02-10 09:17:02.638238 | orchestrator | 2025-02-10 09:17:02 | INFO  | A [1] -- copy-kubeconfig 2025-02-10 09:17:02.638250 | orchestrator | 2025-02-10 09:17:02 | INFO  | A [0] - ceph 2025-02-10 09:17:02.639535 | orchestrator | 2025-02-10 09:17:02 | INFO  | A [1] -- ceph-pools 2025-02-10 09:17:02.639681 | orchestrator | 2025-02-10 09:17:02 | INFO  | A [2] --- copy-ceph-keys 2025-02-10 09:17:02.639698 | orchestrator | 2025-02-10 09:17:02 | INFO  | A [3] ---- cephclient 2025-02-10 09:17:02.639710 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-02-10 09:17:02.640041 | orchestrator | 2025-02-10 09:17:02 | INFO  | A [4] ----- wait-for-keystone 2025-02-10 09:17:02.640060 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [5] ------ kolla-ceph-rgw 2025-02-10 09:17:02.640072 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [5] ------ glance 2025-02-10 09:17:02.640474 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [5] ------ cinder 2025-02-10 09:17:02.640504 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [5] ------ nova 2025-02-10 09:17:02.640518 | orchestrator | 2025-02-10 09:17:02 | INFO  | A [4] ----- prometheus 2025-02-10 09:17:02.772197 | orchestrator | 2025-02-10 09:17:02 | INFO  | D [5] ------ grafana 2025-02-10 09:17:02.772355 | orchestrator | 2025-02-10 09:17:02 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-02-10 09:17:04.604726 | orchestrator | 2025-02-10 09:17:02 | INFO  | Tasks are running in the background 2025-02-10 09:17:04.604911 | orchestrator | 2025-02-10 09:17:04 | INFO  | No task IDs specified, wait for all currently running tasks 2025-02-10 09:17:06.720606 | orchestrator | 2025-02-10 09:17:06 | INFO  | Task febff387-0463-4702-8e0f-f34ede903017 is in state STARTED 2025-02-10 09:17:06.720804 | orchestrator | 2025-02-10 09:17:06 | 
INFO  | Task eeeeef39-dbda-4924-8bd2-4b2edd3b8a08 is in state STARTED 2025-02-10 09:17:06.721102 | orchestrator | 2025-02-10 09:17:06 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:17:06.724136 | orchestrator | 2025-02-10 09:17:06 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:17:06.725055 | orchestrator | 2025-02-10 09:17:06 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:17:09.761215 | orchestrator | 2025-02-10 09:17:06 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:17:09.761352 | orchestrator | 2025-02-10 09:17:06 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:17:09.761392 | orchestrator | 2025-02-10 09:17:09 | INFO  | Task febff387-0463-4702-8e0f-f34ede903017 is in state STARTED 2025-02-10 09:17:09.761502 | orchestrator | 2025-02-10 09:17:09 | INFO  | Task eeeeef39-dbda-4924-8bd2-4b2edd3b8a08 is in state STARTED 2025-02-10 09:17:09.761524 | orchestrator | 2025-02-10 09:17:09 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:17:09.761544 | orchestrator | 2025-02-10 09:17:09 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:17:09.764963 | orchestrator | 2025-02-10 09:17:09 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:17:09.766065 | orchestrator | 2025-02-10 09:17:09 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:17:12.822081 | orchestrator | 2025-02-10 09:17:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:17:12.822232 | orchestrator | 2025-02-10 09:17:12 | INFO  | Task febff387-0463-4702-8e0f-f34ede903017 is in state STARTED 2025-02-10 09:17:12.824353 | orchestrator | 2025-02-10 09:17:12 | INFO  | Task eeeeef39-dbda-4924-8bd2-4b2edd3b8a08 is in state STARTED 2025-02-10 09:17:12.824638 | orchestrator | 2025-02-10 09:17:12 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:17:12.825169 | orchestrator | 2025-02-10 09:17:12 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:17:12.825627 | orchestrator | 2025-02-10 09:17:12 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:17:12.829686 | orchestrator | 2025-02-10 09:17:12 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:17:15.908636 | orchestrator | 2025-02-10 09:17:12 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:17:15.908776 | orchestrator | 2025-02-10 09:17:15 | INFO  | Task febff387-0463-4702-8e0f-f34ede903017 is in state STARTED 2025-02-10 09:17:15.914098 | orchestrator | 2025-02-10 09:17:15 | INFO  | Task eeeeef39-dbda-4924-8bd2-4b2edd3b8a08 is in state STARTED 2025-02-10 09:17:15.914201 | orchestrator | 2025-02-10 09:17:15 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:17:15.914235 | orchestrator | 2025-02-10 09:17:15 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:17:15.916121 | orchestrator | 2025-02-10 09:17:15 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:17:15.919070 | orchestrator | 2025-02-10 09:17:15 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:17:18.997326 | orchestrator | 2025-02-10 09:17:15 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:17:18.997500 | orchestrator | 
2025-02-10 09:17:18 | INFO  | Task febff387-0463-4702-8e0f-f34ede903017 is in state STARTED 2025-02-10 09:17:18.999658 | orchestrator | 2025-02-10 09:17:18 | INFO  | Task eeeeef39-dbda-4924-8bd2-4b2edd3b8a08 is in state STARTED 2025-02-10 09:17:18.999689 | orchestrator | 2025-02-10 09:17:18 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:17:18.999697 | orchestrator | 2025-02-10 09:17:18 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:17:18.999704 | orchestrator | 2025-02-10 09:17:18 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:17:18.999717 | orchestrator | 2025-02-10 09:17:18 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:17:22.056386 | orchestrator | 2025-02-10 09:17:18 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:17:22.056641 | orchestrator | 2025-02-10 09:17:22 | INFO  | Task febff387-0463-4702-8e0f-f34ede903017 is in state STARTED 2025-02-10 09:17:22.056734 | orchestrator | 2025-02-10 09:17:22 | INFO  | Task eeeeef39-dbda-4924-8bd2-4b2edd3b8a08 is in state STARTED 2025-02-10 09:17:22.056754 | orchestrator | 2025-02-10 09:17:22 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:17:22.056774 | orchestrator | 2025-02-10 09:17:22 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:17:22.058778 | orchestrator | 2025-02-10 09:17:22 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:17:25.109878 | orchestrator | 2025-02-10 09:17:22 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:17:25.109994 | orchestrator | 2025-02-10 09:17:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:17:25.110066 | orchestrator | 2025-02-10 09:17:25.110076 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-02-10 09:17:25.110084 | orchestrator | 2025-02-10 09:17:25.110092 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-02-10 09:17:25.110099 | orchestrator | Monday 10 February 2025 09:17:10 +0000 (0:00:00.577) 0:00:00.577 ******* 2025-02-10 09:17:25.110128 | orchestrator | changed: [testbed-manager] 2025-02-10 09:17:25.110137 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:17:25.110144 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:17:25.110151 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:17:25.110158 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:17:25.110166 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:17:25.110173 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:17:25.110180 | orchestrator | 2025-02-10 09:17:25.110187 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-02-10 09:17:25.110194 | orchestrator | Monday 10 February 2025 09:17:13 +0000 (0:00:03.204) 0:00:03.781 ******* 2025-02-10 09:17:25.110202 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-02-10 09:17:25.110216 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-02-10 09:17:25.110224 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-02-10 09:17:25.110231 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-02-10 09:17:25.110238 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-02-10 09:17:25.110245 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-02-10 09:17:25.110252 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-02-10 09:17:25.110259 | orchestrator | 2025-02-10 09:17:25.110266 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-02-10 09:17:25.110273 | orchestrator | Monday 10 February 2025 09:17:15 +0000 (0:00:02.543) 0:00:06.325 ******* 2025-02-10 09:17:25.110282 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-10 09:17:14.200260', 'end': '2025-02-10 09:17:14.207136', 'delta': '0:00:00.006876', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-10 09:17:25.110296 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-10 09:17:14.244753', 'end': '2025-02-10 09:17:14.249027', 'delta': '0:00:00.004274', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-10 09:17:25.110304 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-10 09:17:14.498390', 'end': '2025-02-10 09:17:14.505686', 'delta': '0:00:00.007296', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 
'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-10 09:17:25.110334 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-10 09:17:14.850918', 'end': '2025-02-10 09:17:14.857056', 'delta': '0:00:00.006138', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-10 09:17:25.110342 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-10 09:17:15.106756', 'end': '2025-02-10 09:17:15.111883', 'delta': '0:00:00.005127', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-10 09:17:25.110349 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-10 09:17:15.227740', 'end': '2025-02-10 09:17:15.233630', 'delta': '0:00:00.005890', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-10 09:17:25.110360 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-10 09:17:15.303790', 'end': '2025-02-10 09:17:15.311260', 'delta': '0:00:00.007470', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-10 09:17:25.110368 | orchestrator | 2025-02-10 09:17:25.110375 
| orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-02-10 09:17:25.110382 | orchestrator | Monday 10 February 2025 09:17:18 +0000 (0:00:02.321) 0:00:08.646 ******* 2025-02-10 09:17:25.110389 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-02-10 09:17:25.110397 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-02-10 09:17:25.110404 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-02-10 09:17:25.110411 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-02-10 09:17:25.110419 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-02-10 09:17:25.110431 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-02-10 09:17:25.110439 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-02-10 09:17:25.110447 | orchestrator | 2025-02-10 09:17:25.110455 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-02-10 09:17:25.110482 | orchestrator | Monday 10 February 2025 09:17:20 +0000 (0:00:02.234) 0:00:10.881 ******* 2025-02-10 09:17:25.110490 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-02-10 09:17:25.110498 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-02-10 09:17:25.110505 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-02-10 09:17:25.110513 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-02-10 09:17:25.110521 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-02-10 09:17:25.110528 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-02-10 09:17:25.110536 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-02-10 09:17:25.110544 | orchestrator | 2025-02-10 09:17:25.110552 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:17:25.110563 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:17:25.110589 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:17:25.110598 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:17:25.110606 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:17:25.110614 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:17:25.110622 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:17:25.110630 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:17:25.110638 | orchestrator | 2025-02-10 09:17:25.110645 | orchestrator | 2025-02-10 09:17:25.110653 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:17:25.110661 | orchestrator | Monday 10 February 2025 09:17:23 +0000 (0:00:03.297) 0:00:14.179 ******* 2025-02-10 09:17:25.110669 | orchestrator | =============================================================================== 2025-02-10 09:17:25.110676 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.30s 2025-02-10 09:17:25.110684 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. 
---- 3.20s 2025-02-10 09:17:25.110692 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.54s 2025-02-10 09:17:25.110700 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.32s 2025-02-10 09:17:25.110708 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.23s 2025-02-10 09:17:25.110718 | orchestrator | 2025-02-10 09:17:25 | INFO  | Task febff387-0463-4702-8e0f-f34ede903017 is in state SUCCESS 2025-02-10 09:17:25.115596 | orchestrator | 2025-02-10 09:17:25 | INFO  | Task eeeeef39-dbda-4924-8bd2-4b2edd3b8a08 is in state STARTED 2025-02-10 09:17:25.117056 | orchestrator | 2025-02-10 09:17:25 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:17:25.121322 | orchestrator | 2025-02-10 09:17:25 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:17:25.125704 | orchestrator | 2025-02-10 09:17:25 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:17:25.128534 | orchestrator | 2025-02-10 09:17:25 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:17:28.215338 | orchestrator | 2025-02-10 09:17:25 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:17:28.215542 | orchestrator | 2025-02-10 09:17:28 | INFO  | Task eeeeef39-dbda-4924-8bd2-4b2edd3b8a08 is in state STARTED 2025-02-10 09:17:28.221111 | orchestrator | 2025-02-10 09:17:28 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:17:28.221183 | orchestrator | 2025-02-10 09:17:28 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:17:28.221204 | orchestrator | 2025-02-10 09:17:28 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:17:28.221234 | orchestrator | 2025-02-10 09:17:28 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:17:31.269691 | orchestrator | 2025-02-10 09:17:28 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:17:31.270591 | orchestrator | 2025-02-10 09:17:28 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:17:31.270639 | orchestrator | 2025-02-10 09:17:31 | INFO  | Task eeeeef39-dbda-4924-8bd2-4b2edd3b8a08 is in state STARTED 2025-02-10 09:17:31.270788 | orchestrator | 2025-02-10 09:17:31 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:17:31.270811 | orchestrator | 2025-02-10 09:17:31 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:17:31.275334 | orchestrator | 2025-02-10 09:17:31 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:17:31.278636 | orchestrator | 2025-02-10 09:17:31 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:17:31.280182 | orchestrator | 2025-02-10 09:17:31 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:17:34.336806 | orchestrator | 2025-02-10 09:17:31 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:17:34.336958 | orchestrator | 2025-02-10 09:17:34 | INFO  | Task eeeeef39-dbda-4924-8bd2-4b2edd3b8a08 is in state STARTED 2025-02-10 09:17:34.338117 | orchestrator | 2025-02-10 09:17:34 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:17:34.340292 | orchestrator | 2025-02-10 09:17:34 | INFO  | Task 
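
The geerlingguy.dotfiles play above clones a dotfiles repository, checks which targets are already symlinks (the `ls -F` results echoed per host), removes plain files that would be replaced, and links the configured files into the home directory. A condensed sketch of that flow is shown below; the variable names (`dotfiles_repo`, `dotfiles_repo_local_destination`, `dotfiles_files`, `dotfiles_home`) are illustrative, not necessarily the role's real defaults:

```yaml
# Illustrative reconstruction of the dotfiles flow in the play output above.
- name: Ensure dotfiles repository is cloned locally
  ansible.builtin.git:
    repo: "{{ dotfiles_repo }}"
    dest: "{{ dotfiles_repo_local_destination }}"

- name: Check whether each target is already a link
  ansible.builtin.command: "ls -F {{ dotfiles_home }}/{{ item }}"
  loop: "{{ dotfiles_files }}"          # e.g. ['.tmux.conf']
  register: existing_dotfile_info
  failed_when: false
  changed_when: false

- name: Remove existing dotfiles file if a replacement is being linked
  ansible.builtin.file:
    path: "{{ dotfiles_home }}/{{ item.item }}"
    state: absent
  loop: "{{ existing_dotfile_info.results }}"
  when: "'@' not in item.stdout"        # keep targets that are already symlinks

- name: Link dotfiles into home folder
  ansible.builtin.file:
    src: "{{ dotfiles_repo_local_destination }}/{{ item }}"
    dest: "{{ dotfiles_home }}/{{ item }}"
    state: link
  loop: "{{ dotfiles_files }}"
```

Because `state: link` is idempotent, a re-run of this play would report `ok` instead of `changed` for the linking task.
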
7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:17:34.343241 | orchestrator | 2025-02-10 09:17:34 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:17:34.345883 | orchestrator | 2025-02-10 09:17:34 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:17:34.358979 | orchestrator | 2025-02-10 09:17:34 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:17:37.425956 | orchestrator | 2025-02-10 09:17:34 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:17:37.426172 | orchestrator | 2025-02-10 09:17:37 | INFO  | Task eeeeef39-dbda-4924-8bd2-4b2edd3b8a08 is in state STARTED 2025-02-10 09:17:37.426689 | orchestrator | 2025-02-10 09:17:37 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:17:37.427801 | orchestrator | 2025-02-10 09:17:37 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:17:37.429015 | orchestrator | 2025-02-10 09:17:37 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:17:37.430183 | orchestrator | 2025-02-10 09:17:37 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:17:37.431825 | orchestrator | 2025-02-10 09:17:37 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:17:40.497585 | orchestrator | 2025-02-10 09:17:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:17:40.497698 | orchestrator | 2025-02-10 09:17:40 | INFO  | Task eeeeef39-dbda-4924-8bd2-4b2edd3b8a08 is in state STARTED 2025-02-10 09:17:40.499339 | orchestrator | 2025-02-10 09:17:40 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:17:40.505038 | orchestrator | 2025-02-10 09:17:40 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:17:40.512180 | orchestrator | 2025-02-10 09:17:40 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:17:40.513739 | orchestrator | 2025-02-10 09:17:40 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:17:40.514814 | orchestrator | 2025-02-10 09:17:40 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:17:43.581823 | orchestrator | 2025-02-10 09:17:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:17:43.581979 | orchestrator | 2025-02-10 09:17:43 | INFO  | Task eeeeef39-dbda-4924-8bd2-4b2edd3b8a08 is in state STARTED 2025-02-10 09:17:43.585081 | orchestrator | 2025-02-10 09:17:43 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:17:43.586168 | orchestrator | 2025-02-10 09:17:43 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:17:43.587233 | orchestrator | 2025-02-10 09:17:43 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:17:43.592352 | orchestrator | 2025-02-10 09:17:43 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:17:46.677515 | orchestrator | 2025-02-10 09:17:43 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:17:46.677579 | orchestrator | 2025-02-10 09:17:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:17:46.677600 | orchestrator | 2025-02-10 09:17:46 | INFO  | Task eeeeef39-dbda-4924-8bd2-4b2edd3b8a08 is in state STARTED 2025-02-10 09:17:46.689218 | orchestrator | 2025-02-10 
09:17:46 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:17:46.695966 | orchestrator | 2025-02-10 09:17:46 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:17:46.696618 | orchestrator | 2025-02-10 09:17:46 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:17:46.711321 | orchestrator | 2025-02-10 09:17:46 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:17:46.718942 | orchestrator | 2025-02-10 09:17:46 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:17:49.762332 | orchestrator | 2025-02-10 09:17:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:17:49.762523 | orchestrator | 2025-02-10 09:17:49 | INFO  | Task eeeeef39-dbda-4924-8bd2-4b2edd3b8a08 is in state SUCCESS 2025-02-10 09:17:49.765981 | orchestrator | 2025-02-10 09:17:49 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:17:49.766114 | orchestrator | 2025-02-10 09:17:49 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:17:49.770314 | orchestrator | 2025-02-10 09:17:49 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:17:49.771899 | orchestrator | 2025-02-10 09:17:49 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:17:49.771994 | orchestrator | 2025-02-10 09:17:49 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:17:52.833426 | orchestrator | 2025-02-10 09:17:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:17:52.833649 | orchestrator | 2025-02-10 09:17:52 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:17:52.833950 | orchestrator | 2025-02-10 09:17:52 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:17:52.833988 | orchestrator | 2025-02-10 09:17:52 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:17:52.838699 | orchestrator | 2025-02-10 09:17:52 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:17:52.840049 | orchestrator | 2025-02-10 09:17:52 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:17:52.841741 | orchestrator | 2025-02-10 09:17:52 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:17:55.907597 | orchestrator | 2025-02-10 09:17:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:17:55.907738 | orchestrator | 2025-02-10 09:17:55 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:17:55.910803 | orchestrator | 2025-02-10 09:17:55 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:17:55.910871 | orchestrator | 2025-02-10 09:17:55 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:17:55.910884 | orchestrator | 2025-02-10 09:17:55 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:17:55.910898 | orchestrator | 2025-02-10 09:17:55 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:17:58.978271 | orchestrator | 2025-02-10 09:17:55 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:17:58.978367 | orchestrator | 2025-02-10 09:17:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:17:58.978398 | 
orchestrator | 2025-02-10 09:17:58 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:17:58.978578 | orchestrator | 2025-02-10 09:17:58 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:17:58.981722 | orchestrator | 2025-02-10 09:17:58 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:17:58.982238 | orchestrator | 2025-02-10 09:17:58 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:17:58.985074 | orchestrator | 2025-02-10 09:17:58 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:02.034316 | orchestrator | 2025-02-10 09:17:58 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:18:02.034483 | orchestrator | 2025-02-10 09:17:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:02.034567 | orchestrator | 2025-02-10 09:18:02 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:18:05.128274 | orchestrator | 2025-02-10 09:18:02 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:18:05.128408 | orchestrator | 2025-02-10 09:18:02 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:05.128429 | orchestrator | 2025-02-10 09:18:02 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:18:05.128445 | orchestrator | 2025-02-10 09:18:02 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:05.128541 | orchestrator | 2025-02-10 09:18:02 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:18:05.128560 | orchestrator | 2025-02-10 09:18:02 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:05.128639 | orchestrator | 2025-02-10 09:18:05 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:18:05.128732 | orchestrator | 2025-02-10 09:18:05 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:18:05.128756 | orchestrator | 2025-02-10 09:18:05 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:05.128850 | orchestrator | 2025-02-10 09:18:05 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state STARTED 2025-02-10 09:18:05.128891 | orchestrator | 2025-02-10 09:18:05 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:05.130705 | orchestrator | 2025-02-10 09:18:05 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:18:08.157605 | orchestrator | 2025-02-10 09:18:05 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:08.157694 | orchestrator | 2025-02-10 09:18:08 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:18:08.157758 | orchestrator | 2025-02-10 09:18:08 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:18:08.157966 | orchestrator | 2025-02-10 09:18:08 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:08.158241 | orchestrator | 2025-02-10 09:18:08 | INFO  | Task 67d602ae-f5c1-4938-b727-75a5d160e2e2 is in state SUCCESS 2025-02-10 09:18:08.159915 | orchestrator | 2025-02-10 09:18:08 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:11.201022 | orchestrator | 2025-02-10 09:18:08 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is 
in state STARTED 2025-02-10 09:18:11.201141 | orchestrator | 2025-02-10 09:18:08 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:11.201167 | orchestrator | 2025-02-10 09:18:11 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:18:11.202122 | orchestrator | 2025-02-10 09:18:11 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:18:11.202148 | orchestrator | 2025-02-10 09:18:11 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:11.202811 | orchestrator | 2025-02-10 09:18:11 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:11.203422 | orchestrator | 2025-02-10 09:18:11 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:18:11.206647 | orchestrator | 2025-02-10 09:18:11 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:14.247116 | orchestrator | 2025-02-10 09:18:14 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:18:14.248292 | orchestrator | 2025-02-10 09:18:14 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:18:14.249253 | orchestrator | 2025-02-10 09:18:14 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:14.250901 | orchestrator | 2025-02-10 09:18:14 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:14.252257 | orchestrator | 2025-02-10 09:18:14 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:18:14.252382 | orchestrator | 2025-02-10 09:18:14 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:17.289994 | orchestrator | 2025-02-10 09:18:17 | INFO  | Task c58fd708-f491-438b-afd5-24727bb2cff8 is in state STARTED 2025-02-10 09:18:17.299479 | orchestrator | 2025-02-10 09:18:17 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:18:17.301103 | orchestrator | 2025-02-10 09:18:17 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:17.307027 | orchestrator | 2025-02-10 09:18:17 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:17.313881 | orchestrator | 2025-02-10 09:18:17 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:18:20.349961 | orchestrator | 2025-02-10 09:18:17 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:20.350220 | orchestrator | 2025-02-10 09:18:20.350251 | orchestrator | 2025-02-10 09:18:20.350273 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-02-10 09:18:20.350305 | orchestrator | 2025-02-10 09:18:20.350326 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-02-10 09:18:20.350347 | orchestrator | Monday 10 February 2025 09:17:11 +0000 (0:00:00.608) 0:00:00.608 ******* 2025-02-10 09:18:20.350368 | orchestrator | ok: [testbed-manager] => { 2025-02-10 09:18:20.350391 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-02-10 09:18:20.350412 | orchestrator | } 2025-02-10 09:18:20.350434 | orchestrator | 2025-02-10 09:18:20.350454 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-02-10 09:18:20.350475 | orchestrator | Monday 10 February 2025 09:17:11 +0000 (0:00:00.456) 0:00:01.065 ******* 2025-02-10 09:18:20.350496 | orchestrator | ok: [testbed-manager] 2025-02-10 09:18:20.350559 | orchestrator | 2025-02-10 09:18:20.350579 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-02-10 09:18:20.350592 | orchestrator | Monday 10 February 2025 09:17:12 +0000 (0:00:01.115) 0:00:02.180 ******* 2025-02-10 09:18:20.350604 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-02-10 09:18:20.350614 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-02-10 09:18:20.350625 | orchestrator | 2025-02-10 09:18:20.350635 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-02-10 09:18:20.350645 | orchestrator | Monday 10 February 2025 09:17:14 +0000 (0:00:01.740) 0:00:03.920 ******* 2025-02-10 09:18:20.350655 | orchestrator | changed: [testbed-manager] 2025-02-10 09:18:20.350666 | orchestrator | 2025-02-10 09:18:20.350676 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-02-10 09:18:20.350686 | orchestrator | Monday 10 February 2025 09:17:17 +0000 (0:00:02.595) 0:00:06.516 ******* 2025-02-10 09:18:20.350696 | orchestrator | changed: [testbed-manager] 2025-02-10 09:18:20.350707 | orchestrator | 2025-02-10 09:18:20.350717 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-02-10 09:18:20.350727 | orchestrator | Monday 10 February 2025 09:17:19 +0000 (0:00:01.724) 0:00:08.240 ******* 2025-02-10 09:18:20.350737 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
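
The `FAILED - RETRYING: ... (10 retries left)` line above is Ansible's standard retries/until loop: the task is re-run until its condition holds, and here it succeeded on the second attempt after roughly 26 seconds. A generic sketch of that pattern follows; the command and condition are placeholders, not the actual implementation of the `Manage homer service` task:

```yaml
# Generic retries/until pattern that produces "FAILED - RETRYING: ... (N retries left)".
- name: Manage homer service
  ansible.builtin.command:
    cmd: docker compose --project-directory /opt/homer up -d   # placeholder command
  register: result
  retries: 10          # matches "(10 retries left)" on the first failure
  delay: 10            # seconds between attempts
  until: result.rc == 0
  changed_when: false  # the log reports "ok" once the command succeeds
```
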
2025-02-10 09:18:20.350747 | orchestrator | ok: [testbed-manager] 2025-02-10 09:18:20.350758 | orchestrator | 2025-02-10 09:18:20.350774 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-02-10 09:18:20.350785 | orchestrator | Monday 10 February 2025 09:17:45 +0000 (0:00:26.011) 0:00:34.252 ******* 2025-02-10 09:18:20.350795 | orchestrator | changed: [testbed-manager] 2025-02-10 09:18:20.350805 | orchestrator | 2025-02-10 09:18:20.350815 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:18:20.350825 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:18:20.350837 | orchestrator | 2025-02-10 09:18:20.350847 | orchestrator | 2025-02-10 09:18:20.350857 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:18:20.350888 | orchestrator | Monday 10 February 2025 09:17:49 +0000 (0:00:04.131) 0:00:38.383 ******* 2025-02-10 09:18:20.350898 | orchestrator | =============================================================================== 2025-02-10 09:18:20.350909 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.01s 2025-02-10 09:18:20.350919 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.13s 2025-02-10 09:18:20.350929 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.60s 2025-02-10 09:18:20.350939 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.74s 2025-02-10 09:18:20.350949 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.72s 2025-02-10 09:18:20.350965 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.12s 2025-02-10 09:18:20.350982 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.46s 2025-02-10 09:18:20.350998 | orchestrator | 2025-02-10 09:18:20.351015 | orchestrator | 2025-02-10 09:18:20.351032 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-02-10 09:18:20.351048 | orchestrator | 2025-02-10 09:18:20.351066 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-02-10 09:18:20.351082 | orchestrator | Monday 10 February 2025 09:17:09 +0000 (0:00:00.405) 0:00:00.405 ******* 2025-02-10 09:18:20.351098 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-02-10 09:18:20.351117 | orchestrator | 2025-02-10 09:18:20.351132 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-02-10 09:18:20.351148 | orchestrator | Monday 10 February 2025 09:17:10 +0000 (0:00:00.515) 0:00:00.921 ******* 2025-02-10 09:18:20.351165 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-02-10 09:18:20.351181 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-02-10 09:18:20.351198 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-02-10 09:18:20.351209 | orchestrator | 2025-02-10 09:18:20.351219 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-02-10 
09:18:20.351230 | orchestrator | Monday 10 February 2025 09:17:11 +0000 (0:00:01.303) 0:00:02.224 ******* 2025-02-10 09:18:20.351240 | orchestrator | changed: [testbed-manager] 2025-02-10 09:18:20.351250 | orchestrator | 2025-02-10 09:18:20.351260 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-02-10 09:18:20.351270 | orchestrator | Monday 10 February 2025 09:17:13 +0000 (0:00:01.285) 0:00:03.510 ******* 2025-02-10 09:18:20.351293 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-02-10 09:18:20.351304 | orchestrator | ok: [testbed-manager] 2025-02-10 09:18:20.351314 | orchestrator | 2025-02-10 09:18:20.351324 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-02-10 09:18:20.351334 | orchestrator | Monday 10 February 2025 09:17:53 +0000 (0:00:40.562) 0:00:44.072 ******* 2025-02-10 09:18:20.351344 | orchestrator | changed: [testbed-manager] 2025-02-10 09:18:20.351354 | orchestrator | 2025-02-10 09:18:20.351364 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-02-10 09:18:20.351374 | orchestrator | Monday 10 February 2025 09:17:55 +0000 (0:00:02.132) 0:00:46.205 ******* 2025-02-10 09:18:20.351384 | orchestrator | ok: [testbed-manager] 2025-02-10 09:18:20.351394 | orchestrator | 2025-02-10 09:18:20.351404 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-02-10 09:18:20.351415 | orchestrator | Monday 10 February 2025 09:17:57 +0000 (0:00:01.706) 0:00:47.911 ******* 2025-02-10 09:18:20.351424 | orchestrator | changed: [testbed-manager] 2025-02-10 09:18:20.351434 | orchestrator | 2025-02-10 09:18:20.351444 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-02-10 09:18:20.351454 | orchestrator | Monday 10 February 2025 09:18:01 +0000 (0:00:03.665) 0:00:51.576 ******* 2025-02-10 09:18:20.351477 | orchestrator | changed: [testbed-manager] 2025-02-10 09:18:20.351495 | orchestrator | 2025-02-10 09:18:20.351533 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-02-10 09:18:20.351558 | orchestrator | Monday 10 February 2025 09:18:02 +0000 (0:00:01.281) 0:00:52.858 ******* 2025-02-10 09:18:20.351575 | orchestrator | changed: [testbed-manager] 2025-02-10 09:18:20.351592 | orchestrator | 2025-02-10 09:18:20.351610 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-02-10 09:18:20.351626 | orchestrator | Monday 10 February 2025 09:18:03 +0000 (0:00:00.668) 0:00:53.526 ******* 2025-02-10 09:18:20.351642 | orchestrator | ok: [testbed-manager] 2025-02-10 09:18:20.351661 | orchestrator | 2025-02-10 09:18:20.351678 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:18:20.351695 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:18:20.351714 | orchestrator | 2025-02-10 09:18:20.351730 | orchestrator | 2025-02-10 09:18:20.351747 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:18:20.351764 | orchestrator | Monday 10 February 2025 09:18:03 +0000 (0:00:00.543) 0:00:54.070 ******* 2025-02-10 09:18:20.351781 | orchestrator | 
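
The openstackclient play follows the same shape as the homer play: copy a docker-compose.yml under /opt/openstackclient, start the service, then let handlers ensure the containers are up and wait for a healthy state. A hypothetical compose file matching those handler names is sketched below; the image, volume and healthcheck values are placeholders, not the collection's actual file:

```yaml
# Hypothetical /opt/openstackclient/docker-compose.yml, for illustration only.
services:
  openstackclient:
    image: osism/openstackclient:latest      # placeholder image reference
    restart: unless-stopped
    volumes:
      - /opt/openstackclient/data:/data
    healthcheck:                              # what "Wait for an healthy service" would poll
      test: ["CMD", "openstack", "--version"]
      interval: 30s
      timeout: 10s
      retries: 5
```
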
=============================================================================== 2025-02-10 09:18:20.351797 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 40.56s 2025-02-10 09:18:20.351810 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.67s 2025-02-10 09:18:20.351820 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.13s 2025-02-10 09:18:20.351830 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.71s 2025-02-10 09:18:20.351840 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.30s 2025-02-10 09:18:20.351854 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.29s 2025-02-10 09:18:20.351871 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.28s 2025-02-10 09:18:20.351887 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.67s 2025-02-10 09:18:20.351906 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.54s 2025-02-10 09:18:20.351923 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.52s 2025-02-10 09:18:20.351940 | orchestrator | 2025-02-10 09:18:20.351957 | orchestrator | 2025-02-10 09:18:20.351974 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:18:20.351990 | orchestrator | 2025-02-10 09:18:20.352008 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:18:20.352025 | orchestrator | Monday 10 February 2025 09:17:11 +0000 (0:00:00.332) 0:00:00.332 ******* 2025-02-10 09:18:20.352043 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-02-10 09:18:20.352060 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-02-10 09:18:20.352077 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-02-10 09:18:20.352096 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-02-10 09:18:20.352112 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-02-10 09:18:20.352129 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-02-10 09:18:20.352145 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-02-10 09:18:20.352162 | orchestrator | 2025-02-10 09:18:20.352195 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-02-10 09:18:20.352213 | orchestrator | 2025-02-10 09:18:20.352231 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-02-10 09:18:20.352248 | orchestrator | Monday 10 February 2025 09:17:12 +0000 (0:00:01.438) 0:00:01.770 ******* 2025-02-10 09:18:20.352280 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:18:20.352304 | orchestrator | 2025-02-10 09:18:20.352315 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-02-10 09:18:20.352325 | orchestrator | Monday 10 February 2025 09:17:14 +0000 (0:00:02.397) 0:00:04.167 ******* 2025-02-10 
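
The short `Group hosts based on enabled services` play builds dynamic inventory groups such as `enable_netdata_True`, which subsequent plays (like the netdata play that follows) can target. A probable shape of that task, assuming the `group_by` module and inferring the key from the `(item=enable_netdata_True)` loop items shown above:

```yaml
# Assumed implementation of the dynamic grouping task; key template is inferred.
- name: Group hosts based on enabled services
  ansible.builtin.group_by:
    key: "{{ item }}"
  loop:
    - "enable_netdata_{{ enable_netdata | default(true) }}"
```
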
09:18:20.352335 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:18:20.352345 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:18:20.352356 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:18:20.352366 | orchestrator | ok: [testbed-manager] 2025-02-10 09:18:20.352376 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:18:20.352395 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:18:20.352405 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:18:20.352422 | orchestrator | 2025-02-10 09:18:20.352440 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-02-10 09:18:20.352457 | orchestrator | Monday 10 February 2025 09:17:17 +0000 (0:00:02.443) 0:00:06.611 ******* 2025-02-10 09:18:20.352474 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:18:20.352491 | orchestrator | ok: [testbed-manager] 2025-02-10 09:18:20.352562 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:18:20.352574 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:18:20.352584 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:18:20.352595 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:18:20.352605 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:18:20.352615 | orchestrator | 2025-02-10 09:18:20.352625 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-02-10 09:18:20.352636 | orchestrator | Monday 10 February 2025 09:17:20 +0000 (0:00:02.923) 0:00:09.535 ******* 2025-02-10 09:18:20.352646 | orchestrator | changed: [testbed-manager] 2025-02-10 09:18:20.352656 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:18:20.352678 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:18:20.352689 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:18:20.352699 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:18:20.352709 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:18:20.352719 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:18:20.352729 | orchestrator | 2025-02-10 09:18:20.352739 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-02-10 09:18:20.352749 | orchestrator | Monday 10 February 2025 09:17:22 +0000 (0:00:02.476) 0:00:12.012 ******* 2025-02-10 09:18:20.352759 | orchestrator | changed: [testbed-manager] 2025-02-10 09:18:20.352769 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:18:20.352779 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:18:20.352789 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:18:20.352799 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:18:20.352809 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:18:20.352819 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:18:20.352829 | orchestrator | 2025-02-10 09:18:20.352844 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-02-10 09:18:20.352854 | orchestrator | Monday 10 February 2025 09:17:30 +0000 (0:00:08.075) 0:00:20.087 ******* 2025-02-10 09:18:20.352864 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:18:20.352875 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:18:20.352885 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:18:20.352895 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:18:20.352905 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:18:20.352915 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:18:20.352925 | orchestrator | changed: [testbed-manager] 2025-02-10 09:18:20.352935 | 
orchestrator | 2025-02-10 09:18:20.352946 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-02-10 09:18:20.352956 | orchestrator | Monday 10 February 2025 09:17:48 +0000 (0:00:17.196) 0:00:37.283 ******* 2025-02-10 09:18:20.352968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:18:20.352990 | orchestrator | 2025-02-10 09:18:20.353001 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-02-10 09:18:20.353011 | orchestrator | Monday 10 February 2025 09:17:50 +0000 (0:00:02.122) 0:00:39.405 ******* 2025-02-10 09:18:20.353021 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-02-10 09:18:20.353032 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-02-10 09:18:20.353042 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-02-10 09:18:20.353052 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-02-10 09:18:20.353062 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-02-10 09:18:20.353074 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-02-10 09:18:20.353092 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-02-10 09:18:20.353108 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-02-10 09:18:20.353123 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-02-10 09:18:20.353138 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-02-10 09:18:20.353154 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-02-10 09:18:20.353171 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-02-10 09:18:20.353189 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-02-10 09:18:20.353205 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-02-10 09:18:20.353219 | orchestrator | 2025-02-10 09:18:20.353230 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-02-10 09:18:20.353241 | orchestrator | Monday 10 February 2025 09:17:58 +0000 (0:00:08.059) 0:00:47.465 ******* 2025-02-10 09:18:20.353251 | orchestrator | ok: [testbed-manager] 2025-02-10 09:18:20.353269 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:18:20.353285 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:18:20.353302 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:18:20.353319 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:18:20.353336 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:18:20.353352 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:18:20.353369 | orchestrator | 2025-02-10 09:18:20.353386 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-02-10 09:18:20.353404 | orchestrator | Monday 10 February 2025 09:18:00 +0000 (0:00:01.843) 0:00:49.308 ******* 2025-02-10 09:18:20.353418 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:18:20.353429 | orchestrator | changed: [testbed-manager] 2025-02-10 09:18:20.353440 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:18:20.353450 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:18:20.353460 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:18:20.353470 | orchestrator | 
changed: [testbed-node-4] 2025-02-10 09:18:20.353480 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:18:20.353490 | orchestrator | 2025-02-10 09:18:20.353547 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-02-10 09:18:20.353569 | orchestrator | Monday 10 February 2025 09:18:02 +0000 (0:00:02.937) 0:00:52.245 ******* 2025-02-10 09:18:20.353580 | orchestrator | ok: [testbed-manager] 2025-02-10 09:18:20.353590 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:18:20.353600 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:18:20.353610 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:18:20.353620 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:18:20.353630 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:18:20.353640 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:18:20.353650 | orchestrator | 2025-02-10 09:18:20.353661 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-02-10 09:18:20.353671 | orchestrator | Monday 10 February 2025 09:18:06 +0000 (0:00:03.311) 0:00:55.556 ******* 2025-02-10 09:18:20.353681 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:18:20.353691 | orchestrator | ok: [testbed-manager] 2025-02-10 09:18:20.353709 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:18:20.353719 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:18:20.353736 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:18:20.353753 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:18:20.353771 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:18:20.353789 | orchestrator | 2025-02-10 09:18:20.353807 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-02-10 09:18:20.353826 | orchestrator | Monday 10 February 2025 09:18:08 +0000 (0:00:02.272) 0:00:57.829 ******* 2025-02-10 09:18:20.353844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-02-10 09:18:20.353858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:18:20.353869 | orchestrator | 2025-02-10 09:18:20.353879 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-02-10 09:18:20.353889 | orchestrator | Monday 10 February 2025 09:18:10 +0000 (0:00:01.525) 0:00:59.354 ******* 2025-02-10 09:18:20.353900 | orchestrator | changed: [testbed-manager] 2025-02-10 09:18:20.353910 | orchestrator | 2025-02-10 09:18:20.353920 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-02-10 09:18:20.353930 | orchestrator | Monday 10 February 2025 09:18:13 +0000 (0:00:03.288) 0:01:02.643 ******* 2025-02-10 09:18:20.353940 | orchestrator | changed: [testbed-manager] 2025-02-10 09:18:20.353950 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:18:20.353960 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:18:20.353971 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:18:20.353990 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:18:20.354001 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:18:20.354012 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:18:20.354066 | orchestrator | 2025-02-10 09:18:20.354076 | orchestrator | PLAY RECAP 
********************************************************************* 2025-02-10 09:18:20.354087 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:18:20.354098 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:18:20.354108 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:18:20.354125 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:18:20.354135 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:18:20.354145 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:18:20.354155 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:18:20.354165 | orchestrator | 2025-02-10 09:18:20.354176 | orchestrator | 2025-02-10 09:18:20.354186 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:18:20.354196 | orchestrator | Monday 10 February 2025 09:18:16 +0000 (0:00:03.599) 0:01:06.243 ******* 2025-02-10 09:18:20.354207 | orchestrator | =============================================================================== 2025-02-10 09:18:20.354217 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 17.20s 2025-02-10 09:18:20.354228 | orchestrator | osism.services.netdata : Add repository --------------------------------- 8.08s 2025-02-10 09:18:20.354238 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 8.06s 2025-02-10 09:18:20.354256 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.60s 2025-02-10 09:18:20.354266 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 3.31s 2025-02-10 09:18:20.354276 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.29s 2025-02-10 09:18:20.354286 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.94s 2025-02-10 09:18:20.354296 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.92s 2025-02-10 09:18:20.354306 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.48s 2025-02-10 09:18:20.354320 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.44s 2025-02-10 09:18:20.354330 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.40s 2025-02-10 09:18:20.354348 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.27s 2025-02-10 09:18:23.405328 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.12s 2025-02-10 09:18:23.405465 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.84s 2025-02-10 09:18:23.405483 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.53s 2025-02-10 09:18:23.405497 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.44s 2025-02-10 09:18:23.405588 | orchestrator | 2025-02-10 09:18:20 | INFO  | Task 
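
The netdata play installs from the vendor APT repository (GPG key, repository, package), deploys netdata.conf and stream.conf, opts out of anonymous statistics, adds the netdata user to the docker group, and then includes server-specific tasks on the manager and client tasks on the nodes. A condensed sketch of the Debian-family install path is shown below; the key URL, keyring path and repository line are illustrative assumptions, not the collection's actual values:

```yaml
# Sketch of the apt key / repository / package steps from the play above.
- name: Add repository gpg key
  ansible.builtin.get_url:
    url: https://repo.netdata.cloud/netdatabot.gpg.key   # assumed key location
    dest: /etc/apt/keyrings/netdata.asc
    mode: "0644"

- name: Add repository
  ansible.builtin.apt_repository:
    repo: "deb [signed-by=/etc/apt/keyrings/netdata.asc] https://repo.netdata.cloud/repos/stable/{{ ansible_distribution | lower }}/ {{ ansible_distribution_release }}/"
    state: present

- name: Install package netdata
  ansible.builtin.apt:
    name: netdata
    state: present
    update_cache: true
```
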
c58fd708-f491-438b-afd5-24727bb2cff8 is in state SUCCESS 2025-02-10 09:18:23.405605 | orchestrator | 2025-02-10 09:18:20 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:18:23.405642 | orchestrator | 2025-02-10 09:18:20 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:23.405656 | orchestrator | 2025-02-10 09:18:20 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:23.405668 | orchestrator | 2025-02-10 09:18:20 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:18:23.405681 | orchestrator | 2025-02-10 09:18:20 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:23.405713 | orchestrator | 2025-02-10 09:18:23 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:18:23.409737 | orchestrator | 2025-02-10 09:18:23 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:23.410692 | orchestrator | 2025-02-10 09:18:23 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:23.412380 | orchestrator | 2025-02-10 09:18:23 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:18:23.412559 | orchestrator | 2025-02-10 09:18:23 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:26.477741 | orchestrator | 2025-02-10 09:18:26 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:18:26.477998 | orchestrator | 2025-02-10 09:18:26 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:26.478373 | orchestrator | 2025-02-10 09:18:26 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:26.478419 | orchestrator | 2025-02-10 09:18:26 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:18:29.539442 | orchestrator | 2025-02-10 09:18:26 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:29.539627 | orchestrator | 2025-02-10 09:18:29 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:18:29.539688 | orchestrator | 2025-02-10 09:18:29 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:29.539982 | orchestrator | 2025-02-10 09:18:29 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:29.541030 | orchestrator | 2025-02-10 09:18:29 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state STARTED 2025-02-10 09:18:32.574445 | orchestrator | 2025-02-10 09:18:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:32.574659 | orchestrator | 2025-02-10 09:18:32 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:18:32.574747 | orchestrator | 2025-02-10 09:18:32 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:32.575311 | orchestrator | 2025-02-10 09:18:32 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:32.575709 | orchestrator | 2025-02-10 09:18:32 | INFO  | Task 148f25b5-4ea1-4b4e-8606-32041f2644cc is in state SUCCESS 2025-02-10 09:18:32.575952 | orchestrator | 2025-02-10 09:18:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:35.630149 | orchestrator | 2025-02-10 09:18:35 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:18:38.676255 | orchestrator | 2025-02-10 09:18:35 | INFO  | Task 
7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:38.676421 | orchestrator | 2025-02-10 09:18:35 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:38.676450 | orchestrator | 2025-02-10 09:18:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:38.676494 | orchestrator | 2025-02-10 09:18:38 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:18:38.677162 | orchestrator | 2025-02-10 09:18:38 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:38.677228 | orchestrator | 2025-02-10 09:18:38 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:41.716809 | orchestrator | 2025-02-10 09:18:38 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:41.717013 | orchestrator | 2025-02-10 09:18:41 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:18:41.717192 | orchestrator | 2025-02-10 09:18:41 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:41.717243 | orchestrator | 2025-02-10 09:18:41 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:44.762369 | orchestrator | 2025-02-10 09:18:41 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:44.762576 | orchestrator | 2025-02-10 09:18:44 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:18:44.767622 | orchestrator | 2025-02-10 09:18:44 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:44.770205 | orchestrator | 2025-02-10 09:18:44 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:44.770432 | orchestrator | 2025-02-10 09:18:44 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:47.817879 | orchestrator | 2025-02-10 09:18:47 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:18:47.818177 | orchestrator | 2025-02-10 09:18:47 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:47.818785 | orchestrator | 2025-02-10 09:18:47 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:47.818853 | orchestrator | 2025-02-10 09:18:47 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:50.865355 | orchestrator | 2025-02-10 09:18:50 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:18:53.921170 | orchestrator | 2025-02-10 09:18:50 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:53.921282 | orchestrator | 2025-02-10 09:18:50 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:53.921296 | orchestrator | 2025-02-10 09:18:50 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:53.921323 | orchestrator | 2025-02-10 09:18:53 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:18:53.921743 | orchestrator | 2025-02-10 09:18:53 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:53.922643 | orchestrator | 2025-02-10 09:18:53 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:18:53.923135 | orchestrator | 2025-02-10 09:18:53 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:18:56.976138 | orchestrator | 2025-02-10 09:18:56 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state 
STARTED 2025-02-10 09:18:56.976372 | orchestrator | 2025-02-10 09:18:56 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:18:56.977762 | orchestrator | 2025-02-10 09:18:56 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:19:00.025217 | orchestrator | 2025-02-10 09:18:56 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:00.025413 | orchestrator | 2025-02-10 09:19:00 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:00.025513 | orchestrator | 2025-02-10 09:19:00 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:00.025604 | orchestrator | 2025-02-10 09:19:00 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:19:03.062673 | orchestrator | 2025-02-10 09:19:00 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:03.062840 | orchestrator | 2025-02-10 09:19:03 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:03.063211 | orchestrator | 2025-02-10 09:19:03 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:06.090716 | orchestrator | 2025-02-10 09:19:03 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:19:06.090842 | orchestrator | 2025-02-10 09:19:03 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:06.090871 | orchestrator | 2025-02-10 09:19:06 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:06.091627 | orchestrator | 2025-02-10 09:19:06 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:06.092797 | orchestrator | 2025-02-10 09:19:06 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:19:09.132487 | orchestrator | 2025-02-10 09:19:06 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:09.132701 | orchestrator | 2025-02-10 09:19:09 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:09.135851 | orchestrator | 2025-02-10 09:19:09 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:09.135890 | orchestrator | 2025-02-10 09:19:09 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:19:12.185996 | orchestrator | 2025-02-10 09:19:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:12.186216 | orchestrator | 2025-02-10 09:19:12 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:15.257641 | orchestrator | 2025-02-10 09:19:12 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:15.257824 | orchestrator | 2025-02-10 09:19:12 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:19:15.257846 | orchestrator | 2025-02-10 09:19:12 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:15.257882 | orchestrator | 2025-02-10 09:19:15 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:15.257967 | orchestrator | 2025-02-10 09:19:15 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:15.258648 | orchestrator | 2025-02-10 09:19:15 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:19:18.315743 | orchestrator | 2025-02-10 09:19:15 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:18.315914 | orchestrator 
| 2025-02-10 09:19:18 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:18.316506 | orchestrator | 2025-02-10 09:19:18 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:18.320253 | orchestrator | 2025-02-10 09:19:18 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state STARTED 2025-02-10 09:19:21.370391 | orchestrator | 2025-02-10 09:19:18 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:21.370619 | orchestrator | 2025-02-10 09:19:21 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:21.373313 | orchestrator | 2025-02-10 09:19:21 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:21.375961 | orchestrator | 2025-02-10 09:19:21 | INFO  | Task 14997054-4fe3-47e2-b5f6-f96a2e33a2f8 is in state SUCCESS 2025-02-10 09:19:21.378345 | orchestrator | 2025-02-10 09:19:21 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:21.378407 | orchestrator | 2025-02-10 09:19:21.378424 | orchestrator | 2025-02-10 09:19:21.378438 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-02-10 09:19:21.378453 | orchestrator | 2025-02-10 09:19:21.378467 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-02-10 09:19:21.378481 | orchestrator | Monday 10 February 2025 09:17:29 +0000 (0:00:00.246) 0:00:00.246 ******* 2025-02-10 09:19:21.378495 | orchestrator | ok: [testbed-manager] 2025-02-10 09:19:21.378511 | orchestrator | 2025-02-10 09:19:21.378525 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-02-10 09:19:21.378539 | orchestrator | Monday 10 February 2025 09:17:30 +0000 (0:00:00.872) 0:00:01.118 ******* 2025-02-10 09:19:21.378585 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-02-10 09:19:21.378599 | orchestrator | 2025-02-10 09:19:21.378613 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-02-10 09:19:21.378627 | orchestrator | Monday 10 February 2025 09:17:31 +0000 (0:00:00.854) 0:00:01.972 ******* 2025-02-10 09:19:21.378643 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:21.378667 | orchestrator | 2025-02-10 09:19:21.378687 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-02-10 09:19:21.378711 | orchestrator | Monday 10 February 2025 09:17:33 +0000 (0:00:02.070) 0:00:04.043 ******* 2025-02-10 09:19:21.378734 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2025-02-10 09:19:21.378759 | orchestrator | ok: [testbed-manager] 2025-02-10 09:19:21.378774 | orchestrator | 2025-02-10 09:19:21.378788 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-02-10 09:19:21.378802 | orchestrator | Monday 10 February 2025 09:18:26 +0000 (0:00:52.634) 0:00:56.677 ******* 2025-02-10 09:19:21.378816 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:21.378830 | orchestrator | 2025-02-10 09:19:21.378844 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:19:21.378880 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:19:21.378896 | orchestrator | 2025-02-10 09:19:21.378910 | orchestrator | 2025-02-10 09:19:21.378924 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:19:21.378938 | orchestrator | Monday 10 February 2025 09:18:30 +0000 (0:00:04.197) 0:01:00.875 ******* 2025-02-10 09:19:21.378954 | orchestrator | =============================================================================== 2025-02-10 09:19:21.378970 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 52.63s 2025-02-10 09:19:21.378986 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.20s 2025-02-10 09:19:21.379001 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.07s 2025-02-10 09:19:21.379017 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.87s 2025-02-10 09:19:21.379032 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.85s 2025-02-10 09:19:21.379050 | orchestrator | 2025-02-10 09:19:21.379073 | orchestrator | 2025-02-10 09:19:21.379097 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-02-10 09:19:21.379120 | orchestrator | 2025-02-10 09:19:21.379146 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-02-10 09:19:21.379169 | orchestrator | Monday 10 February 2025 09:17:05 +0000 (0:00:00.322) 0:00:00.322 ******* 2025-02-10 09:19:21.379190 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:19:21.379207 | orchestrator | 2025-02-10 09:19:21.379224 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-02-10 09:19:21.379239 | orchestrator | Monday 10 February 2025 09:17:07 +0000 (0:00:01.626) 0:00:01.949 ******* 2025-02-10 09:19:21.379262 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-10 09:19:21.379278 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-10 09:19:21.379294 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-10 09:19:21.379307 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-10 09:19:21.379321 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-10 09:19:21.379335 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-10 09:19:21.379349 | orchestrator | changed: 
[testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-10 09:19:21.379362 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-10 09:19:21.379376 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-10 09:19:21.379389 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-10 09:19:21.379403 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-10 09:19:21.379417 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-10 09:19:21.379432 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-10 09:19:21.379446 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-10 09:19:21.379459 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-10 09:19:21.379473 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-10 09:19:21.379499 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-10 09:19:21.379519 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-10 09:19:21.379568 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-10 09:19:21.379583 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-10 09:19:21.379598 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-10 09:19:21.379612 | orchestrator | 2025-02-10 09:19:21.379626 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-02-10 09:19:21.379640 | orchestrator | Monday 10 February 2025 09:17:11 +0000 (0:00:03.855) 0:00:05.804 ******* 2025-02-10 09:19:21.379655 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:19:21.379677 | orchestrator | 2025-02-10 09:19:21.379691 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-02-10 09:19:21.379704 | orchestrator | Monday 10 February 2025 09:17:12 +0000 (0:00:01.580) 0:00:07.385 ******* 2025-02-10 09:19:21.379722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.379741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.379756 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.379770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.379785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.379806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.379843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.379869 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.379891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.379916 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.379943 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.379972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.379998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.380036 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.380052 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.380067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.380082 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.380096 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.380111 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.380130 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.380151 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.380165 | orchestrator | 2025-02-10 09:19:21.380179 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-02-10 09:19:21.380193 | orchestrator | Monday 10 February 2025 09:17:18 +0000 (0:00:05.347) 0:00:12.732 ******* 2025-02-10 09:19:21.380214 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:19:21.380229 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.380248 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.380262 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:19:21.380277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:19:21.380292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.380306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.380329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:19:21.380351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.380366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.380380 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:19:21.380394 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:19:21.380408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:19:21.380423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.380437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.380451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:19:21.380473 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.380487 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:19:21.380501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.380515 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:19:21.380539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:19:21.380590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.380605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.380620 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:19:21.380634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:19:21.380649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.380670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.380684 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:19:21.380698 | orchestrator | 2025-02-10 09:19:21.380712 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-02-10 09:19:21.380726 | orchestrator | Monday 10 February 2025 09:17:20 +0000 (0:00:02.054) 0:00:14.786 ******* 2025-02-10 09:19:21.380739 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:19:21.380771 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.381227 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.381260 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:19:21.381284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:19:21.381307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.381330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.381368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:19:21.381394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.381409 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.381423 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:19:21.381437 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:19:21.381461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:19:21.381477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:19:21.381491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.381505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.381527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.381656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.381677 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:19:21.381692 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:19:21.381706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:19:21.381736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-10 09:19:21.381753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.381769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.381785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.381810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.381826 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:19:21.381841 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:19:21.381857 | orchestrator | 2025-02-10 09:19:21.381873 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-02-10 09:19:21.381889 | orchestrator | Monday 10 February 2025 09:17:22 +0000 (0:00:02.248) 0:00:17.035 ******* 2025-02-10 09:19:21.381904 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:19:21.381919 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:19:21.381936 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:19:21.381952 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:19:21.381967 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:19:21.381983 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:19:21.381998 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:19:21.382013 | orchestrator | 2025-02-10 09:19:21.382065 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-02-10 09:19:21.382081 | orchestrator | Monday 10 February 2025 09:17:23 +0000 (0:00:00.945) 0:00:17.980 ******* 2025-02-10 09:19:21.382096 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:19:21.382109 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:19:21.382122 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:19:21.382134 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:19:21.382146 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:19:21.382158 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:19:21.382170 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:19:21.382182 | orchestrator | 2025-02-10 09:19:21.382195 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-02-10 09:19:21.382207 | orchestrator | Monday 10 February 2025 09:17:24 +0000 (0:00:01.045) 0:00:19.026 ******* 2025-02-10 09:19:21.382219 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:19:21.382231 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:19:21.382244 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:19:21.382256 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:19:21.382268 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:19:21.382280 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:19:21.382292 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:21.382304 | orchestrator | 2025-02-10 09:19:21.382316 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-02-10 09:19:21.382328 | orchestrator | Monday 10 February 2025 09:17:55 +0000 (0:00:31.354) 0:00:50.381 ******* 2025-02-10 09:19:21.382341 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:19:21.382355 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:19:21.382378 | orchestrator | ok: [testbed-manager] 2025-02-10 09:19:21.382398 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:19:21.382421 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:19:21.382443 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:19:21.382459 | orchestrator | ok: 
[testbed-node-5] 2025-02-10 09:19:21.382472 | orchestrator | 2025-02-10 09:19:21.382484 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-02-10 09:19:21.382504 | orchestrator | Monday 10 February 2025 09:17:59 +0000 (0:00:03.154) 0:00:53.536 ******* 2025-02-10 09:19:21.382516 | orchestrator | ok: [testbed-manager] 2025-02-10 09:19:21.382536 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:19:21.382571 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:19:21.382584 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:19:21.382596 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:19:21.382615 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:19:21.382628 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:19:21.382640 | orchestrator | 2025-02-10 09:19:21.382653 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-02-10 09:19:21.382665 | orchestrator | Monday 10 February 2025 09:18:00 +0000 (0:00:01.069) 0:00:54.605 ******* 2025-02-10 09:19:21.382677 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:19:21.382690 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:19:21.382702 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:19:21.382714 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:19:21.382726 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:19:21.382738 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:19:21.382750 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:19:21.382763 | orchestrator | 2025-02-10 09:19:21.382775 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-02-10 09:19:21.382787 | orchestrator | Monday 10 February 2025 09:18:01 +0000 (0:00:01.253) 0:00:55.859 ******* 2025-02-10 09:19:21.382800 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:19:21.382812 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:19:21.382845 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:19:21.382857 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:19:21.382870 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:19:21.382946 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:19:21.382962 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:19:21.382980 | orchestrator | 2025-02-10 09:19:21.383002 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-02-10 09:19:21.383024 | orchestrator | Monday 10 February 2025 09:18:02 +0000 (0:00:01.188) 0:00:57.048 ******* 2025-02-10 09:19:21.383063 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.383080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.383093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.383113 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.383135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.383157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.383170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.383183 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.383196 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.383215 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.383232 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.383245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.383268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.383287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.383300 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.383313 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.383326 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.383338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.383351 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.383370 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.383392 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.383406 | orchestrator | 2025-02-10 09:19:21.383418 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-02-10 09:19:21.383431 | orchestrator | Monday 10 February 2025 09:18:08 +0000 (0:00:05.628) 0:01:02.677 ******* 2025-02-10 09:19:21.383443 | orchestrator | [WARNING]: Skipped 2025-02-10 09:19:21.383456 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-02-10 09:19:21.383468 | orchestrator | to this access issue: 2025-02-10 09:19:21.383481 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-02-10 09:19:21.383493 | orchestrator | directory 2025-02-10 09:19:21.383505 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:19:21.383517 | orchestrator | 2025-02-10 09:19:21.383530 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-02-10 09:19:21.383562 | orchestrator | Monday 10 February 2025 09:18:08 +0000 (0:00:00.634) 0:01:03.312 ******* 2025-02-10 09:19:21.383575 | orchestrator | [WARNING]: Skipped 2025-02-10 09:19:21.383588 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-02-10 09:19:21.383600 | orchestrator | to this access issue: 2025-02-10 09:19:21.383613 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-02-10 09:19:21.383625 | orchestrator | directory 2025-02-10 09:19:21.383638 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:19:21.383650 | orchestrator | 2025-02-10 09:19:21.383662 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-02-10 09:19:21.383675 | orchestrator | Monday 10 February 2025 09:18:09 +0000 (0:00:00.672) 0:01:03.984 ******* 2025-02-10 09:19:21.383687 | orchestrator | [WARNING]: Skipped 2025-02-10 09:19:21.383699 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-02-10 09:19:21.383712 | orchestrator | to this access issue: 2025-02-10 09:19:21.383724 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-02-10 09:19:21.383737 | orchestrator | directory 2025-02-10 09:19:21.383749 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:19:21.383761 | orchestrator | 2025-02-10 09:19:21.383773 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-02-10 09:19:21.383786 | orchestrator | Monday 10 February 2025 09:18:10 +0000 (0:00:00.637) 0:01:04.622 ******* 2025-02-10 09:19:21.383798 | orchestrator | [WARNING]: Skipped 2025-02-10 09:19:21.383810 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-02-10 09:19:21.383823 | orchestrator | to this access issue: 2025-02-10 09:19:21.383835 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-02-10 09:19:21.383847 | orchestrator | directory 2025-02-10 09:19:21.383859 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:19:21.383878 | orchestrator | 2025-02-10 09:19:21.383891 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-02-10 09:19:21.383903 
| orchestrator | Monday 10 February 2025 09:18:10 +0000 (0:00:00.631) 0:01:05.253 ******* 2025-02-10 09:19:21.383916 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:21.383928 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:19:21.383940 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:19:21.383952 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:19:21.383964 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:19:21.383976 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:19:21.383988 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:19:21.384001 | orchestrator | 2025-02-10 09:19:21.384013 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-02-10 09:19:21.384025 | orchestrator | Monday 10 February 2025 09:18:16 +0000 (0:00:05.698) 0:01:10.952 ******* 2025-02-10 09:19:21.384038 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-10 09:19:21.384051 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-10 09:19:21.384064 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-10 09:19:21.384076 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-10 09:19:21.384089 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-10 09:19:21.384101 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-10 09:19:21.384113 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-10 09:19:21.384125 | orchestrator | 2025-02-10 09:19:21.384138 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-02-10 09:19:21.384150 | orchestrator | Monday 10 February 2025 09:18:19 +0000 (0:00:03.367) 0:01:14.319 ******* 2025-02-10 09:19:21.384162 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:21.384175 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:19:21.384187 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:19:21.384199 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:19:21.384211 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:19:21.384224 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:19:21.384236 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:19:21.384248 | orchestrator | 2025-02-10 09:19:21.384261 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-02-10 09:19:21.384273 | orchestrator | Monday 10 February 2025 09:18:21 +0000 (0:00:01.931) 0:01:16.250 ******* 2025-02-10 09:19:21.384292 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.384310 | orchestrator | skipping: [testbed-manager] 
=> (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.384329 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.384343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.384355 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.384369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.384381 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.384399 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.384416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.384435 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.384448 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.384461 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.384474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.384487 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.384500 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.384518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.384531 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.384577 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.384595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:19:21.384609 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.384622 | orchestrator | 
ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.384634 | orchestrator | 2025-02-10 09:19:21.384647 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-02-10 09:19:21.384663 | orchestrator | Monday 10 February 2025 09:18:24 +0000 (0:00:02.369) 0:01:18.619 ******* 2025-02-10 09:19:21.384676 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-10 09:19:21.384689 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-10 09:19:21.384701 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-10 09:19:21.384713 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-10 09:19:21.384726 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-10 09:19:21.384738 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-10 09:19:21.384750 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-10 09:19:21.384762 | orchestrator | 2025-02-10 09:19:21.384774 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-02-10 09:19:21.384787 | orchestrator | Monday 10 February 2025 09:18:27 +0000 (0:00:03.389) 0:01:22.009 ******* 2025-02-10 09:19:21.384799 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-10 09:19:21.384811 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-10 09:19:21.384824 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-10 09:19:21.384836 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-10 09:19:21.384854 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-10 09:19:21.384872 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-10 09:19:21.384885 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-10 09:19:21.384897 | orchestrator | 2025-02-10 09:19:21.384909 | orchestrator | TASK [common : Check common containers] **************************************** 2025-02-10 09:19:21.384922 | orchestrator | Monday 10 February 2025 09:18:31 +0000 (0:00:03.764) 0:01:25.773 ******* 2025-02-10 09:19:21.384934 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.384947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.384960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.384973 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.384985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.385003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.385028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.385041 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.385054 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.385066 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-10 09:19:21.385079 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.385092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.385105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.385163 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.385193 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.385207 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.385220 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.385232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.385245 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.385258 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.385271 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:19:21.385289 | orchestrator | 2025-02-10 09:19:21.385301 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-02-10 09:19:21.385313 | orchestrator | Monday 10 February 2025 09:18:34 +0000 (0:00:03.167) 0:01:28.941 ******* 2025-02-10 09:19:21.385325 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:21.385338 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:19:21.385350 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:19:21.385362 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:19:21.385374 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:19:21.385386 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:19:21.385398 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:19:21.385410 | orchestrator | 2025-02-10 09:19:21.385422 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-02-10 09:19:21.385435 | orchestrator | Monday 10 February 2025 09:18:36 +0000 (0:00:01.609) 0:01:30.551 ******* 2025-02-10 09:19:21.385447 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:21.385459 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:19:21.385476 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:19:21.385489 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:19:21.385501 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:19:21.385513 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:19:21.385525 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:19:21.385537 | orchestrator | 2025-02-10 09:19:21.385566 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-10 09:19:21.385585 | orchestrator | Monday 10 February 2025 09:18:37 +0000 (0:00:01.618) 0:01:32.170 ******* 2025-02-10 09:19:21.385597 | orchestrator | 2025-02-10 09:19:21.385610 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-10 09:19:21.385622 | orchestrator | Monday 10 February 2025 09:18:37 +0000 (0:00:00.073) 0:01:32.243 ******* 2025-02-10 09:19:21.385634 | orchestrator | 2025-02-10 09:19:21.385646 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-10 09:19:21.385658 | orchestrator | Monday 10 February 2025 09:18:38 +0000 (0:00:00.224) 0:01:32.467 ******* 2025-02-10 09:19:21.385670 | orchestrator | 2025-02-10 09:19:21.385683 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-10 09:19:21.385695 | orchestrator | Monday 10 February 2025 09:18:38 +0000 (0:00:00.051) 0:01:32.519 ******* 2025-02-10 09:19:21.385707 | orchestrator | 2025-02-10 09:19:21.385719 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-10 09:19:21.385732 | 
orchestrator | Monday 10 February 2025 09:18:38 +0000 (0:00:00.053) 0:01:32.573 ******* 2025-02-10 09:19:21.385744 | orchestrator | 2025-02-10 09:19:21.385756 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-10 09:19:21.385768 | orchestrator | Monday 10 February 2025 09:18:38 +0000 (0:00:00.052) 0:01:32.625 ******* 2025-02-10 09:19:21.385781 | orchestrator | 2025-02-10 09:19:21.385793 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-10 09:19:21.385805 | orchestrator | Monday 10 February 2025 09:18:38 +0000 (0:00:00.186) 0:01:32.811 ******* 2025-02-10 09:19:21.385817 | orchestrator | 2025-02-10 09:19:21.385829 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-02-10 09:19:21.385841 | orchestrator | Monday 10 February 2025 09:18:38 +0000 (0:00:00.068) 0:01:32.880 ******* 2025-02-10 09:19:21.385854 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:21.385866 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:19:21.385878 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:19:21.385890 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:19:21.385902 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:19:21.385914 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:19:21.385926 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:19:21.385938 | orchestrator | 2025-02-10 09:19:21.385956 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-02-10 09:19:21.385968 | orchestrator | Monday 10 February 2025 09:18:47 +0000 (0:00:09.183) 0:01:42.063 ******* 2025-02-10 09:19:21.385987 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:19:21.385999 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:19:21.386011 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:19:21.386057 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:19:21.386069 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:21.386082 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:19:21.386094 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:19:21.386106 | orchestrator | 2025-02-10 09:19:21.386118 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-02-10 09:19:21.386131 | orchestrator | Monday 10 February 2025 09:19:07 +0000 (0:00:20.237) 0:02:02.301 ******* 2025-02-10 09:19:21.386143 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:19:21.386155 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:19:21.386168 | orchestrator | ok: [testbed-manager] 2025-02-10 09:19:21.386180 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:19:21.386192 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:19:21.386204 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:19:21.386216 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:19:21.386228 | orchestrator | 2025-02-10 09:19:21.386240 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-02-10 09:19:21.386253 | orchestrator | Monday 10 February 2025 09:19:10 +0000 (0:00:02.616) 0:02:04.918 ******* 2025-02-10 09:19:21.386265 | orchestrator | changed: [testbed-manager] 2025-02-10 09:19:21.386277 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:19:21.386289 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:19:21.386301 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:19:21.386313 | 
orchestrator | changed: [testbed-node-5] 2025-02-10 09:19:21.386325 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:19:21.386337 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:19:21.386349 | orchestrator | 2025-02-10 09:19:21.386362 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:19:21.386374 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:19:21.386387 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:19:21.386399 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:19:21.386412 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:19:21.386424 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:19:21.386436 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:19:21.386449 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:19:21.386461 | orchestrator | 2025-02-10 09:19:21.386473 | orchestrator | 2025-02-10 09:19:21.386486 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:19:21.386498 | orchestrator | Monday 10 February 2025 09:19:20 +0000 (0:00:09.729) 0:02:14.647 ******* 2025-02-10 09:19:21.386516 | orchestrator | =============================================================================== 2025-02-10 09:19:24.436045 | orchestrator | common : Ensure fluentd image is present for label check --------------- 31.36s 2025-02-10 09:19:24.436190 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 20.24s 2025-02-10 09:19:24.436213 | orchestrator | common : Restart cron container ----------------------------------------- 9.73s 2025-02-10 09:19:24.436265 | orchestrator | common : Restart fluentd container -------------------------------------- 9.18s 2025-02-10 09:19:24.436280 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 5.70s 2025-02-10 09:19:24.436294 | orchestrator | common : Copying over config.json files for services -------------------- 5.63s 2025-02-10 09:19:24.436308 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.35s 2025-02-10 09:19:24.436323 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.86s 2025-02-10 09:19:24.436337 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.76s 2025-02-10 09:19:24.436351 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.39s 2025-02-10 09:19:24.436365 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.37s 2025-02-10 09:19:24.436379 | orchestrator | common : Check common containers ---------------------------------------- 3.17s 2025-02-10 09:19:24.436393 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 3.15s 2025-02-10 09:19:24.436407 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.62s 2025-02-10 09:19:24.436421 | orchestrator | common : Ensuring config 
directories have correct owner and permission --- 2.37s 2025-02-10 09:19:24.436435 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.25s 2025-02-10 09:19:24.436449 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.05s 2025-02-10 09:19:24.436463 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.93s 2025-02-10 09:19:24.436477 | orchestrator | common : include_tasks -------------------------------------------------- 1.63s 2025-02-10 09:19:24.436492 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.62s 2025-02-10 09:19:24.436525 | orchestrator | 2025-02-10 09:19:24 | INFO  | Task e5eb74c5-d90a-41cb-a62c-6eb77baeed1a is in state STARTED 2025-02-10 09:19:24.436777 | orchestrator | 2025-02-10 09:19:24 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:19:24.436915 | orchestrator | 2025-02-10 09:19:24 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:24.437676 | orchestrator | 2025-02-10 09:19:24 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:19:24.438337 | orchestrator | 2025-02-10 09:19:24 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:24.439193 | orchestrator | 2025-02-10 09:19:24 | INFO  | Task 122b6d74-b6bb-4330-994b-66e41a2b8a3b is in state STARTED 2025-02-10 09:19:24.439841 | orchestrator | 2025-02-10 09:19:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:27.475487 | orchestrator | 2025-02-10 09:19:27 | INFO  | Task e5eb74c5-d90a-41cb-a62c-6eb77baeed1a is in state STARTED 2025-02-10 09:19:27.475911 | orchestrator | 2025-02-10 09:19:27 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:19:27.475946 | orchestrator | 2025-02-10 09:19:27 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:27.475968 | orchestrator | 2025-02-10 09:19:27 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:19:27.476491 | orchestrator | 2025-02-10 09:19:27 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:27.480602 | orchestrator | 2025-02-10 09:19:27 | INFO  | Task 122b6d74-b6bb-4330-994b-66e41a2b8a3b is in state STARTED 2025-02-10 09:19:30.524228 | orchestrator | 2025-02-10 09:19:27 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:30.524357 | orchestrator | 2025-02-10 09:19:30 | INFO  | Task e5eb74c5-d90a-41cb-a62c-6eb77baeed1a is in state STARTED 2025-02-10 09:19:30.529013 | orchestrator | 2025-02-10 09:19:30 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:19:30.529672 | orchestrator | 2025-02-10 09:19:30 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:30.530418 | orchestrator | 2025-02-10 09:19:30 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:19:30.531325 | orchestrator | 2025-02-10 09:19:30 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:30.536190 | orchestrator | 2025-02-10 09:19:30 | INFO  | Task 122b6d74-b6bb-4330-994b-66e41a2b8a3b is in state STARTED 2025-02-10 09:19:33.587450 | orchestrator | 2025-02-10 09:19:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:33.587664 | orchestrator | 2025-02-10 09:19:33 | INFO  | Task 
e5eb74c5-d90a-41cb-a62c-6eb77baeed1a is in state STARTED 2025-02-10 09:19:33.589541 | orchestrator | 2025-02-10 09:19:33 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:19:33.589625 | orchestrator | 2025-02-10 09:19:33 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:33.590515 | orchestrator | 2025-02-10 09:19:33 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:19:33.591380 | orchestrator | 2025-02-10 09:19:33 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:33.600364 | orchestrator | 2025-02-10 09:19:33 | INFO  | Task 122b6d74-b6bb-4330-994b-66e41a2b8a3b is in state STARTED 2025-02-10 09:19:36.649918 | orchestrator | 2025-02-10 09:19:33 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:36.650197 | orchestrator | 2025-02-10 09:19:36 | INFO  | Task e5eb74c5-d90a-41cb-a62c-6eb77baeed1a is in state STARTED 2025-02-10 09:19:36.653813 | orchestrator | 2025-02-10 09:19:36 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:19:36.654732 | orchestrator | 2025-02-10 09:19:36 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:36.656655 | orchestrator | 2025-02-10 09:19:36 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:19:36.658273 | orchestrator | 2025-02-10 09:19:36 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:36.659776 | orchestrator | 2025-02-10 09:19:36 | INFO  | Task 122b6d74-b6bb-4330-994b-66e41a2b8a3b is in state STARTED 2025-02-10 09:19:39.729326 | orchestrator | 2025-02-10 09:19:36 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:39.729483 | orchestrator | 2025-02-10 09:19:39 | INFO  | Task e5eb74c5-d90a-41cb-a62c-6eb77baeed1a is in state STARTED 2025-02-10 09:19:39.730286 | orchestrator | 2025-02-10 09:19:39 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:19:39.730412 | orchestrator | 2025-02-10 09:19:39 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:39.731302 | orchestrator | 2025-02-10 09:19:39 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:19:39.734318 | orchestrator | 2025-02-10 09:19:39 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:39.735067 | orchestrator | 2025-02-10 09:19:39 | INFO  | Task 122b6d74-b6bb-4330-994b-66e41a2b8a3b is in state STARTED 2025-02-10 09:19:39.735174 | orchestrator | 2025-02-10 09:19:39 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:42.797424 | orchestrator | 2025-02-10 09:19:42 | INFO  | Task e5eb74c5-d90a-41cb-a62c-6eb77baeed1a is in state STARTED 2025-02-10 09:19:42.800404 | orchestrator | 2025-02-10 09:19:42 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:19:42.802246 | orchestrator | 2025-02-10 09:19:42 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:42.806643 | orchestrator | 2025-02-10 09:19:42 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:19:42.810123 | orchestrator | 2025-02-10 09:19:42 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:42.815586 | orchestrator | 2025-02-10 09:19:42 | INFO  | Task 122b6d74-b6bb-4330-994b-66e41a2b8a3b is in state STARTED 2025-02-10 09:19:45.874301 | 
orchestrator | 2025-02-10 09:19:42 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:45.874446 | orchestrator | 2025-02-10 09:19:45 | INFO  | Task e5eb74c5-d90a-41cb-a62c-6eb77baeed1a is in state STARTED 2025-02-10 09:19:45.874524 | orchestrator | 2025-02-10 09:19:45 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:19:45.877997 | orchestrator | 2025-02-10 09:19:45 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:45.879082 | orchestrator | 2025-02-10 09:19:45 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:19:45.885418 | orchestrator | 2025-02-10 09:19:45 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:45.885937 | orchestrator | 2025-02-10 09:19:45 | INFO  | Task 122b6d74-b6bb-4330-994b-66e41a2b8a3b is in state STARTED 2025-02-10 09:19:48.923718 | orchestrator | 2025-02-10 09:19:45 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:48.923881 | orchestrator | 2025-02-10 09:19:48 | INFO  | Task e5eb74c5-d90a-41cb-a62c-6eb77baeed1a is in state SUCCESS 2025-02-10 09:19:48.925387 | orchestrator | 2025-02-10 09:19:48 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:19:48.926170 | orchestrator | 2025-02-10 09:19:48 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:48.927011 | orchestrator | 2025-02-10 09:19:48 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:19:48.927620 | orchestrator | 2025-02-10 09:19:48 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:48.928271 | orchestrator | 2025-02-10 09:19:48 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:19:48.930089 | orchestrator | 2025-02-10 09:19:48 | INFO  | Task 122b6d74-b6bb-4330-994b-66e41a2b8a3b is in state STARTED 2025-02-10 09:19:51.964063 | orchestrator | 2025-02-10 09:19:48 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:51.964268 | orchestrator | 2025-02-10 09:19:51 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:19:51.964351 | orchestrator | 2025-02-10 09:19:51 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:51.964375 | orchestrator | 2025-02-10 09:19:51 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:19:51.964975 | orchestrator | 2025-02-10 09:19:51 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:51.967491 | orchestrator | 2025-02-10 09:19:51 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:19:51.968424 | orchestrator | 2025-02-10 09:19:51 | INFO  | Task 122b6d74-b6bb-4330-994b-66e41a2b8a3b is in state STARTED 2025-02-10 09:19:55.001320 | orchestrator | 2025-02-10 09:19:51 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:55.001480 | orchestrator | 2025-02-10 09:19:54 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:19:55.002304 | orchestrator | 2025-02-10 09:19:54 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:55.002346 | orchestrator | 2025-02-10 09:19:55 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:19:55.002975 | orchestrator | 2025-02-10 09:19:55 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 
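
The entries above show the deployment wrapper watching a set of task IDs: each task is re-checked every few seconds, its state is logged, and a one-second wait is announced between checks until the task reaches SUCCESS. A minimal, hypothetical Python sketch of that wait-until-done pattern follows; the get_task_state helper and the stub in the usage line are illustrative and not the actual OSISM client API.

import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    # Re-check every pending task until it leaves the STARTED state.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # assumed to return e.g. "STARTED", "SUCCESS", "FAILURE"
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

# usage sketch with a stub lookup that reports every task as finished:
# wait_for_tasks(["task-a", "task-b"], lambda task_id: "SUCCESS")
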
2025-02-10 09:19:55.003739 | orchestrator | 2025-02-10 09:19:55 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:19:55.004641 | orchestrator | 2025-02-10 09:19:55 | INFO  | Task 122b6d74-b6bb-4330-994b-66e41a2b8a3b is in state STARTED 2025-02-10 09:19:55.004729 | orchestrator | 2025-02-10 09:19:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:19:58.033232 | orchestrator | 2025-02-10 09:19:58 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:19:58.033560 | orchestrator | 2025-02-10 09:19:58 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:19:58.033608 | orchestrator | 2025-02-10 09:19:58 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:19:58.033618 | orchestrator | 2025-02-10 09:19:58 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:19:58.033628 | orchestrator | 2025-02-10 09:19:58 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:19:58.033643 | orchestrator | 2025-02-10 09:19:58 | INFO  | Task 122b6d74-b6bb-4330-994b-66e41a2b8a3b is in state STARTED 2025-02-10 09:20:01.078014 | orchestrator | 2025-02-10 09:19:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:01.078238 | orchestrator | 2025-02-10 09:20:01 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:01.080694 | orchestrator | 2025-02-10 09:20:01 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:01.081212 | orchestrator | 2025-02-10 09:20:01 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:20:01.081922 | orchestrator | 2025-02-10 09:20:01 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:01.082416 | orchestrator | 2025-02-10 09:20:01 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:20:01.083145 | orchestrator | 2025-02-10 09:20:01 | INFO  | Task 122b6d74-b6bb-4330-994b-66e41a2b8a3b is in state STARTED 2025-02-10 09:20:01.083305 | orchestrator | 2025-02-10 09:20:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:04.116899 | orchestrator | 2025-02-10 09:20:04 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:04.118741 | orchestrator | 2025-02-10 09:20:04 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:04.118841 | orchestrator | 2025-02-10 09:20:04 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:20:04.119104 | orchestrator | 2025-02-10 09:20:04 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:04.119797 | orchestrator | 2025-02-10 09:20:04 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:20:04.120332 | orchestrator | 2025-02-10 09:20:04 | INFO  | Task 122b6d74-b6bb-4330-994b-66e41a2b8a3b is in state SUCCESS 2025-02-10 09:20:04.121706 | orchestrator | 2025-02-10 09:20:04.121755 | orchestrator | 2025-02-10 09:20:04.121770 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:20:04.121787 | orchestrator | 2025-02-10 09:20:04.121832 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:20:04.121864 | orchestrator | Monday 10 February 2025 09:19:26 +0000 (0:00:00.439) 0:00:00.439 ******* 
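
At this point a short grouping play runs before each service role: hosts are sorted into dynamic groups named after the Kolla action and after feature flags (items such as enable_memcached_True below), so the role that follows only targets hosts where the service is enabled. A rough, hypothetical Python analogue of that group_by step is sketched here; the host names and flag are taken from the log, but the function itself is illustrative rather than the playbook's actual mechanism.

def group_hosts_by_flag(host_vars, flag):
    # Mimic the idea of Ansible group_by: name a group after the flag and its value.
    groups = {}
    for host, hvars in host_vars.items():
        group = f"{flag}_{hvars.get(flag, False)}"  # e.g. "enable_memcached_True"
        groups.setdefault(group, []).append(host)
    return groups

hosts = {
    "testbed-node-0": {"enable_memcached": True},
    "testbed-node-1": {"enable_memcached": True},
    "testbed-node-2": {"enable_memcached": True},
}
print(group_hosts_by_flag(hosts, "enable_memcached"))
# -> {'enable_memcached_True': ['testbed-node-0', 'testbed-node-1', 'testbed-node-2']}
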
2025-02-10 09:20:04.121879 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:20:04.121895 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:20:04.121909 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:20:04.121923 | orchestrator | 2025-02-10 09:20:04.121937 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:20:04.121951 | orchestrator | Monday 10 February 2025 09:19:26 +0000 (0:00:00.647) 0:00:01.086 ******* 2025-02-10 09:20:04.121966 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-02-10 09:20:04.121980 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-02-10 09:20:04.121994 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-02-10 09:20:04.122008 | orchestrator | 2025-02-10 09:20:04.122068 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-02-10 09:20:04.122085 | orchestrator | 2025-02-10 09:20:04.122099 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-02-10 09:20:04.122113 | orchestrator | Monday 10 February 2025 09:19:27 +0000 (0:00:00.407) 0:00:01.493 ******* 2025-02-10 09:20:04.122128 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:20:04.122143 | orchestrator | 2025-02-10 09:20:04.122157 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-02-10 09:20:04.122171 | orchestrator | Monday 10 February 2025 09:19:28 +0000 (0:00:01.111) 0:00:02.605 ******* 2025-02-10 09:20:04.122184 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-02-10 09:20:04.122199 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-02-10 09:20:04.122216 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-02-10 09:20:04.122232 | orchestrator | 2025-02-10 09:20:04.122247 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-02-10 09:20:04.122263 | orchestrator | Monday 10 February 2025 09:19:29 +0000 (0:00:01.627) 0:00:04.232 ******* 2025-02-10 09:20:04.122279 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-02-10 09:20:04.122295 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-02-10 09:20:04.122311 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-02-10 09:20:04.122328 | orchestrator | 2025-02-10 09:20:04.122344 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-02-10 09:20:04.122360 | orchestrator | Monday 10 February 2025 09:19:32 +0000 (0:00:03.042) 0:00:07.275 ******* 2025-02-10 09:20:04.122375 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:20:04.122398 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:20:04.122414 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:20:04.122430 | orchestrator | 2025-02-10 09:20:04.122446 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-02-10 09:20:04.122462 | orchestrator | Monday 10 February 2025 09:19:37 +0000 (0:00:04.109) 0:00:11.384 ******* 2025-02-10 09:20:04.122478 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:20:04.122494 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:20:04.122510 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:20:04.122526 | orchestrator | 2025-02-10 09:20:04.122541 | 
orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:20:04.122557 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:20:04.122599 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:20:04.122615 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:20:04.122630 | orchestrator | 2025-02-10 09:20:04.122644 | orchestrator | 2025-02-10 09:20:04.122667 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:20:04.122681 | orchestrator | Monday 10 February 2025 09:19:46 +0000 (0:00:09.208) 0:00:20.593 ******* 2025-02-10 09:20:04.122695 | orchestrator | =============================================================================== 2025-02-10 09:20:04.122709 | orchestrator | memcached : Restart memcached container --------------------------------- 9.21s 2025-02-10 09:20:04.122722 | orchestrator | memcached : Check memcached container ----------------------------------- 4.11s 2025-02-10 09:20:04.122736 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.04s 2025-02-10 09:20:04.122750 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.63s 2025-02-10 09:20:04.122764 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.11s 2025-02-10 09:20:04.122777 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.65s 2025-02-10 09:20:04.122791 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2025-02-10 09:20:04.122805 | orchestrator | 2025-02-10 09:20:04.122819 | orchestrator | 2025-02-10 09:20:04.122833 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:20:04.122846 | orchestrator | 2025-02-10 09:20:04.122860 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:20:04.122874 | orchestrator | Monday 10 February 2025 09:19:25 +0000 (0:00:00.452) 0:00:00.452 ******* 2025-02-10 09:20:04.122888 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:20:04.122903 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:20:04.122917 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:20:04.122930 | orchestrator | 2025-02-10 09:20:04.122945 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:20:04.122971 | orchestrator | Monday 10 February 2025 09:19:25 +0000 (0:00:00.586) 0:00:01.038 ******* 2025-02-10 09:20:04.122986 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-02-10 09:20:04.123000 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-02-10 09:20:04.123015 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-02-10 09:20:04.123029 | orchestrator | 2025-02-10 09:20:04.123043 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-02-10 09:20:04.123057 | orchestrator | 2025-02-10 09:20:04.123071 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-02-10 09:20:04.123091 | orchestrator | Monday 10 February 2025 09:19:26 +0000 (0:00:00.766) 0:00:01.805 ******* 2025-02-10 09:20:04.123105 | 
orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:20:04.123119 | orchestrator | 2025-02-10 09:20:04.123133 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-02-10 09:20:04.123148 | orchestrator | Monday 10 February 2025 09:19:27 +0000 (0:00:01.160) 0:00:02.965 ******* 2025-02-10 09:20:04.123163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123285 | orchestrator | 2025-02-10 09:20:04.123300 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-02-10 09:20:04.123314 | orchestrator | Monday 10 February 2025 09:19:30 +0000 (0:00:02.614) 0:00:05.579 ******* 2025-02-10 09:20:04.123329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123437 | orchestrator | 2025-02-10 09:20:04.123451 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-02-10 09:20:04.123466 | orchestrator | Monday 10 February 2025 09:19:34 +0000 (0:00:03.904) 0:00:09.484 ******* 2025-02-10 09:20:04.123480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123606 | orchestrator | 2025-02-10 09:20:04.123620 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-02-10 09:20:04.123635 | orchestrator | Monday 10 February 2025 09:19:39 +0000 (0:00:05.505) 0:00:14.989 ******* 2025-02-10 09:20:04.123649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-10 09:20:04.123743 | orchestrator | 2025-02-10 09:20:04.123764 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-02-10 09:20:07.153174 | orchestrator | Monday 10 February 2025 09:19:42 +0000 (0:00:03.067) 0:00:18.056 ******* 2025-02-10 09:20:07.153283 | orchestrator | 2025-02-10 09:20:07.153293 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-02-10 09:20:07.153301 | orchestrator | Monday 10 February 2025 09:19:43 +0000 (0:00:00.087) 0:00:18.144 ******* 2025-02-10 09:20:07.153308 | 
orchestrator | 2025-02-10 09:20:07.153316 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-02-10 09:20:07.153323 | orchestrator | Monday 10 February 2025 09:19:43 +0000 (0:00:00.109) 0:00:18.254 ******* 2025-02-10 09:20:07.153331 | orchestrator | 2025-02-10 09:20:07.153338 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-02-10 09:20:07.153345 | orchestrator | Monday 10 February 2025 09:19:43 +0000 (0:00:00.241) 0:00:18.496 ******* 2025-02-10 09:20:07.153378 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:20:07.153385 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:20:07.153392 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:20:07.153399 | orchestrator | 2025-02-10 09:20:07.153405 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-02-10 09:20:07.153412 | orchestrator | Monday 10 February 2025 09:19:53 +0000 (0:00:09.729) 0:00:28.225 ******* 2025-02-10 09:20:07.153418 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:20:07.153425 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:20:07.153432 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:20:07.153454 | orchestrator | 2025-02-10 09:20:07.153461 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:20:07.153468 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:20:07.153477 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:20:07.153483 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:20:07.153490 | orchestrator | 2025-02-10 09:20:07.153497 | orchestrator | 2025-02-10 09:20:07.153503 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:20:07.153510 | orchestrator | Monday 10 February 2025 09:20:02 +0000 (0:00:09.062) 0:00:37.288 ******* 2025-02-10 09:20:07.153517 | orchestrator | =============================================================================== 2025-02-10 09:20:07.153523 | orchestrator | redis : Restart redis container ----------------------------------------- 9.73s 2025-02-10 09:20:07.153530 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.06s 2025-02-10 09:20:07.153536 | orchestrator | redis : Copying over redis config files --------------------------------- 5.51s 2025-02-10 09:20:07.153543 | orchestrator | redis : Copying over default config.json files -------------------------- 3.90s 2025-02-10 09:20:07.153549 | orchestrator | redis : Check redis containers ------------------------------------------ 3.07s 2025-02-10 09:20:07.153556 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.61s 2025-02-10 09:20:07.153563 | orchestrator | redis : include_tasks --------------------------------------------------- 1.16s 2025-02-10 09:20:07.153596 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.77s 2025-02-10 09:20:07.153610 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.59s 2025-02-10 09:20:07.153617 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.44s 2025-02-10 09:20:07.153623 | orchestrator | 
2025-02-10 09:20:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:07.153646 | orchestrator | 2025-02-10 09:20:07 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:07.153693 | orchestrator | 2025-02-10 09:20:07 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:07.153705 | orchestrator | 2025-02-10 09:20:07 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:20:07.154233 | orchestrator | 2025-02-10 09:20:07 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:07.155005 | orchestrator | 2025-02-10 09:20:07 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:20:10.183385 | orchestrator | 2025-02-10 09:20:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:10.183563 | orchestrator | 2025-02-10 09:20:10 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:10.184985 | orchestrator | 2025-02-10 09:20:10 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:13.229642 | orchestrator | 2025-02-10 09:20:10 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:20:13.229790 | orchestrator | 2025-02-10 09:20:10 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:13.229813 | orchestrator | 2025-02-10 09:20:10 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:20:13.229830 | orchestrator | 2025-02-10 09:20:10 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:13.229865 | orchestrator | 2025-02-10 09:20:13 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:13.229952 | orchestrator | 2025-02-10 09:20:13 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:13.231283 | orchestrator | 2025-02-10 09:20:13 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:20:13.233792 | orchestrator | 2025-02-10 09:20:13 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:13.234134 | orchestrator | 2025-02-10 09:20:13 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:20:13.236093 | orchestrator | 2025-02-10 09:20:13 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:16.273667 | orchestrator | 2025-02-10 09:20:16 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:16.274115 | orchestrator | 2025-02-10 09:20:16 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:16.274161 | orchestrator | 2025-02-10 09:20:16 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:20:16.275681 | orchestrator | 2025-02-10 09:20:16 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:16.276286 | orchestrator | 2025-02-10 09:20:16 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:20:19.319176 | orchestrator | 2025-02-10 09:20:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:19.319356 | orchestrator | 2025-02-10 09:20:19 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:19.319572 | orchestrator | 2025-02-10 09:20:19 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:19.319633 | orchestrator | 
2025-02-10 09:20:19 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:20:19.320046 | orchestrator | 2025-02-10 09:20:19 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:19.323797 | orchestrator | 2025-02-10 09:20:19 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:20:22.368319 | orchestrator | 2025-02-10 09:20:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:22.368467 | orchestrator | 2025-02-10 09:20:22 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:22.368761 | orchestrator | 2025-02-10 09:20:22 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:22.369665 | orchestrator | 2025-02-10 09:20:22 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:20:22.370611 | orchestrator | 2025-02-10 09:20:22 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:22.371981 | orchestrator | 2025-02-10 09:20:22 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:20:22.373083 | orchestrator | 2025-02-10 09:20:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:25.451217 | orchestrator | 2025-02-10 09:20:25 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:25.454828 | orchestrator | 2025-02-10 09:20:25 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:25.456084 | orchestrator | 2025-02-10 09:20:25 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:20:25.457353 | orchestrator | 2025-02-10 09:20:25 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:25.464436 | orchestrator | 2025-02-10 09:20:25 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:20:28.528137 | orchestrator | 2025-02-10 09:20:25 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:28.528294 | orchestrator | 2025-02-10 09:20:28 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:28.529199 | orchestrator | 2025-02-10 09:20:28 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:28.529228 | orchestrator | 2025-02-10 09:20:28 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:20:28.529246 | orchestrator | 2025-02-10 09:20:28 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:28.529267 | orchestrator | 2025-02-10 09:20:28 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:20:31.582552 | orchestrator | 2025-02-10 09:20:28 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:31.582776 | orchestrator | 2025-02-10 09:20:31 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:31.585080 | orchestrator | 2025-02-10 09:20:31 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:31.585827 | orchestrator | 2025-02-10 09:20:31 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:20:31.586841 | orchestrator | 2025-02-10 09:20:31 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:31.587554 | orchestrator | 2025-02-10 09:20:31 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 
09:20:31.587840 | orchestrator | 2025-02-10 09:20:31 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:34.632552 | orchestrator | 2025-02-10 09:20:34 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:34.635939 | orchestrator | 2025-02-10 09:20:34 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:34.636671 | orchestrator | 2025-02-10 09:20:34 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:20:34.639446 | orchestrator | 2025-02-10 09:20:34 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:34.639679 | orchestrator | 2025-02-10 09:20:34 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:20:34.639738 | orchestrator | 2025-02-10 09:20:34 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:37.686792 | orchestrator | 2025-02-10 09:20:37 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:37.687008 | orchestrator | 2025-02-10 09:20:37 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:37.687320 | orchestrator | 2025-02-10 09:20:37 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:20:37.689001 | orchestrator | 2025-02-10 09:20:37 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:37.689385 | orchestrator | 2025-02-10 09:20:37 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:20:40.760523 | orchestrator | 2025-02-10 09:20:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:40.760762 | orchestrator | 2025-02-10 09:20:40 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:40.762398 | orchestrator | 2025-02-10 09:20:40 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:40.762450 | orchestrator | 2025-02-10 09:20:40 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:20:43.817133 | orchestrator | 2025-02-10 09:20:40 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:43.817277 | orchestrator | 2025-02-10 09:20:40 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:20:43.817298 | orchestrator | 2025-02-10 09:20:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:43.817333 | orchestrator | 2025-02-10 09:20:43 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:43.818919 | orchestrator | 2025-02-10 09:20:43 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:43.818959 | orchestrator | 2025-02-10 09:20:43 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:20:43.819463 | orchestrator | 2025-02-10 09:20:43 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:43.822967 | orchestrator | 2025-02-10 09:20:43 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:20:46.873921 | orchestrator | 2025-02-10 09:20:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:46.874236 | orchestrator | 2025-02-10 09:20:46 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:46.875448 | orchestrator | 2025-02-10 09:20:46 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 
09:20:46.877002 | orchestrator | 2025-02-10 09:20:46 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state STARTED 2025-02-10 09:20:46.878115 | orchestrator | 2025-02-10 09:20:46 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:46.879206 | orchestrator | 2025-02-10 09:20:46 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:20:49.927371 | orchestrator | 2025-02-10 09:20:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:49.927542 | orchestrator | 2025-02-10 09:20:49 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:49.929265 | orchestrator | 2025-02-10 09:20:49 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:49.929308 | orchestrator | 2025-02-10 09:20:49 | INFO  | Task 7e621a20-a8a1-4e9b-aee4-67da1ebac40f is in state SUCCESS 2025-02-10 09:20:49.930485 | orchestrator | 2025-02-10 09:20:49.930516 | orchestrator | 2025-02-10 09:20:49.930530 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:20:49.930545 | orchestrator | 2025-02-10 09:20:49.930560 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:20:49.930574 | orchestrator | Monday 10 February 2025 09:19:25 +0000 (0:00:00.626) 0:00:00.627 ******* 2025-02-10 09:20:49.930588 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:20:49.930627 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:20:49.930641 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:20:49.930655 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:20:49.930669 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:20:49.930683 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:20:49.930727 | orchestrator | 2025-02-10 09:20:49.930742 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:20:49.930756 | orchestrator | Monday 10 February 2025 09:19:26 +0000 (0:00:01.202) 0:00:01.829 ******* 2025-02-10 09:20:49.930770 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-10 09:20:49.930784 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-10 09:20:49.930798 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-10 09:20:49.930854 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-10 09:20:49.930868 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-10 09:20:49.930882 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-10 09:20:49.930896 | orchestrator | 2025-02-10 09:20:49.930911 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-02-10 09:20:49.930925 | orchestrator | 2025-02-10 09:20:49.930939 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-02-10 09:20:49.930953 | orchestrator | Monday 10 February 2025 09:19:27 +0000 (0:00:01.061) 0:00:02.891 ******* 2025-02-10 09:20:49.930968 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:20:49.930983 | orchestrator | 2025-02-10 09:20:49.930997 | 
orchestrator | TASK [module-load : Load modules] ********************************************** 2025-02-10 09:20:49.931012 | orchestrator | Monday 10 February 2025 09:19:31 +0000 (0:00:03.359) 0:00:06.250 ******* 2025-02-10 09:20:49.931026 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-02-10 09:20:49.931040 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-02-10 09:20:49.931054 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-02-10 09:20:49.931068 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-02-10 09:20:49.931082 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-02-10 09:20:49.931098 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-02-10 09:20:49.931113 | orchestrator | 2025-02-10 09:20:49.931129 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-02-10 09:20:49.931163 | orchestrator | Monday 10 February 2025 09:19:33 +0000 (0:00:02.227) 0:00:08.477 ******* 2025-02-10 09:20:49.931179 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-02-10 09:20:49.931195 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-02-10 09:20:49.931210 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-02-10 09:20:49.931226 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-02-10 09:20:49.931260 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-02-10 09:20:49.931275 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-02-10 09:20:49.931290 | orchestrator | 2025-02-10 09:20:49.931307 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-02-10 09:20:49.931323 | orchestrator | Monday 10 February 2025 09:19:37 +0000 (0:00:04.301) 0:00:12.779 ******* 2025-02-10 09:20:49.931338 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-02-10 09:20:49.931353 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:20:49.931370 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-02-10 09:20:49.931386 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:20:49.931402 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-02-10 09:20:49.931418 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:20:49.931434 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-02-10 09:20:49.931449 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:20:49.931465 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-02-10 09:20:49.931491 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:20:49.931505 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-02-10 09:20:49.931519 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:20:49.931533 | orchestrator | 2025-02-10 09:20:49.931548 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-02-10 09:20:49.931561 | orchestrator | Monday 10 February 2025 09:19:41 +0000 (0:00:04.137) 0:00:16.919 ******* 2025-02-10 09:20:49.931575 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:20:49.931589 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:20:49.931622 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:20:49.931636 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:20:49.931650 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:20:49.931664 | 
orchestrator | skipping: [testbed-node-2] 2025-02-10 09:20:49.931679 | orchestrator | 2025-02-10 09:20:49.931693 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-02-10 09:20:49.931707 | orchestrator | Monday 10 February 2025 09:19:43 +0000 (0:00:01.780) 0:00:18.700 ******* 2025-02-10 09:20:49.931737 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:20:49.931756 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:20:49.931771 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:20:49.931786 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:20:49.931809 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:20:49.931824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:20:49.931847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:20:49.931862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:20:49.931877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 
09:20:49.931892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:20:49.931913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:20:49.931953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:20:49.931969 | orchestrator | 2025-02-10 09:20:49.931984 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-02-10 09:20:49.931998 | orchestrator | Monday 10 February 2025 09:19:47 +0000 (0:00:03.591) 0:00:22.292 ******* 2025-02-10 09:20:49.932012 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932027 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932042 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932154 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932168 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932194 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932268 | orchestrator | 2025-02-10 09:20:49.932283 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-02-10 09:20:49.932297 | orchestrator | Monday 10 February 2025 09:19:50 +0000 (0:00:02.797) 0:00:25.090 ******* 2025-02-10 09:20:49.932311 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:20:49.932325 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:20:49.932339 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:20:49.932353 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:20:49.932367 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:20:49.932381 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:20:49.932395 | orchestrator | 2025-02-10 09:20:49.932409 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-02-10 09:20:49.932423 | orchestrator | Monday 10 February 2025 09:19:52 +0000 (0:00:02.747) 0:00:27.837 ******* 2025-02-10 09:20:49.932437 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:20:49.932451 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:20:49.932465 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:20:49.932479 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:20:49.932493 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:20:49.932507 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:20:49.932521 | orchestrator | 2025-02-10 09:20:49.932548 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-02-10 09:20:49.932562 | orchestrator | Monday 10 February 2025 09:19:55 +0000 (0:00:02.472) 0:00:30.310 ******* 2025-02-10 09:20:49.932577 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:20:49.932590 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:20:49.932639 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:20:49.932654 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:20:49.932668 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:20:49.932682 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:20:49.932696 | orchestrator | 2025-02-10 09:20:49.932710 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-02-10 09:20:49.932724 | orchestrator | Monday 10 February 2025 09:19:57 +0000 (0:00:02.429) 0:00:32.739 ******* 2025-02-10 09:20:49.932739 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932765 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932819 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 
'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932868 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932883 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:20:49.932904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-10 09:20:49.933023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:20:49.933056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:20:49.933082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-10 09:20:49.933098 | orchestrator | 2025-02-10 09:20:49.933112 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-02-10 09:20:49.933127 | orchestrator | Monday 10 February 2025 09:20:00 +0000 (0:00:02.550) 0:00:35.290 ******* 2025-02-10 09:20:49.933141 | orchestrator | 2025-02-10 09:20:49.933155 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-02-10 09:20:49.933169 | orchestrator | Monday 10 February 2025 09:20:00 +0000 (0:00:00.253) 0:00:35.543 ******* 2025-02-10 09:20:49.933182 | orchestrator | 2025-02-10 09:20:49.933196 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-02-10 09:20:49.933210 | orchestrator | Monday 10 February 2025 09:20:01 +0000 (0:00:00.703) 0:00:36.247 ******* 2025-02-10 09:20:49.933224 | orchestrator | 2025-02-10 09:20:49.933238 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-02-10 09:20:49.933252 | orchestrator | Monday 10 February 2025 09:20:01 +0000 (0:00:00.172) 0:00:36.419 ******* 2025-02-10 09:20:49.933266 | orchestrator | 2025-02-10 09:20:49.933280 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-02-10 09:20:49.933293 | orchestrator | Monday 10 February 2025 09:20:01 +0000 (0:00:00.294) 0:00:36.714 ******* 2025-02-10 09:20:49.933307 | orchestrator | 2025-02-10 09:20:49.933321 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-02-10 09:20:49.933335 | orchestrator | Monday 10 February 2025 09:20:01 +0000 (0:00:00.204) 0:00:36.918 ******* 2025-02-10 09:20:49.933348 | orchestrator | 2025-02-10 09:20:49.933362 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-02-10 09:20:49.933376 | orchestrator | Monday 10 February 2025 09:20:02 +0000 (0:00:00.493) 0:00:37.411 ******* 2025-02-10 09:20:49.933390 | orchestrator 
| changed: [testbed-node-0] 2025-02-10 09:20:49.933404 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:20:49.933418 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:20:49.933432 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:20:49.933446 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:20:49.933460 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:20:49.933474 | orchestrator | 2025-02-10 09:20:49.933489 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-02-10 09:20:49.933503 | orchestrator | Monday 10 February 2025 09:20:07 +0000 (0:00:05.383) 0:00:42.795 ******* 2025-02-10 09:20:49.933516 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:20:49.933530 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:20:49.933544 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:20:49.933565 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:20:49.933579 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:20:49.933593 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:20:49.933666 | orchestrator | 2025-02-10 09:20:49.933690 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-02-10 09:20:49.933706 | orchestrator | Monday 10 February 2025 09:20:09 +0000 (0:00:01.689) 0:00:44.484 ******* 2025-02-10 09:20:49.933722 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:20:49.933737 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:20:49.933753 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:20:49.933769 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:20:49.933795 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:20:49.933813 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:20:49.933829 | orchestrator | 2025-02-10 09:20:49.933845 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-02-10 09:20:49.933861 | orchestrator | Monday 10 February 2025 09:20:19 +0000 (0:00:10.087) 0:00:54.572 ******* 2025-02-10 09:20:49.933877 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-02-10 09:20:49.933893 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-02-10 09:20:49.933909 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-02-10 09:20:49.933924 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-02-10 09:20:49.933944 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-02-10 09:20:49.933960 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-02-10 09:20:49.933976 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-02-10 09:20:49.933991 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-02-10 09:20:49.934007 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-02-10 09:20:49.934081 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 
'testbed-node-3'}) 2025-02-10 09:20:49.934096 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-02-10 09:20:49.934110 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-02-10 09:20:49.934124 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-02-10 09:20:49.934138 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-02-10 09:20:49.934153 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-02-10 09:20:49.934167 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-02-10 09:20:49.934181 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-02-10 09:20:49.934195 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-02-10 09:20:49.934207 | orchestrator | 2025-02-10 09:20:49.934220 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-02-10 09:20:49.934232 | orchestrator | Monday 10 February 2025 09:20:28 +0000 (0:00:08.653) 0:01:03.225 ******* 2025-02-10 09:20:49.934244 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-02-10 09:20:49.934264 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:20:49.934277 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-02-10 09:20:49.934290 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:20:49.934302 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-02-10 09:20:49.934315 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:20:49.934327 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-02-10 09:20:49.934340 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-02-10 09:20:49.934353 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-02-10 09:20:49.934365 | orchestrator | 2025-02-10 09:20:49.934377 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-02-10 09:20:49.934390 | orchestrator | Monday 10 February 2025 09:20:32 +0000 (0:00:03.736) 0:01:06.962 ******* 2025-02-10 09:20:49.934402 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-02-10 09:20:49.934415 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:20:49.934427 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-02-10 09:20:49.934439 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:20:49.934452 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-02-10 09:20:49.934465 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:20:49.934478 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-02-10 09:20:49.934491 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-02-10 09:20:49.934503 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-02-10 09:20:49.934515 | orchestrator | 2025-02-10 09:20:49.934528 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 
2025-02-10 09:20:49.934540 | orchestrator | Monday 10 February 2025 09:20:37 +0000 (0:00:05.131) 0:01:12.093 ******* 2025-02-10 09:20:49.934560 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:20:49.938057 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:20:49.938085 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:20:49.938098 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:20:49.938111 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:20:49.938123 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:20:49.938136 | orchestrator | 2025-02-10 09:20:49.938148 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:20:49.938162 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:20:49.938176 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:20:49.938189 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:20:49.938201 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:20:49.938214 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:20:49.938234 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:20:49.938247 | orchestrator | 2025-02-10 09:20:49.938259 | orchestrator | 2025-02-10 09:20:49.938272 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:20:49.938289 | orchestrator | Monday 10 February 2025 09:20:48 +0000 (0:00:10.900) 0:01:22.993 ******* 2025-02-10 09:20:49.938302 | orchestrator | =============================================================================== 2025-02-10 09:20:49.938314 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.99s 2025-02-10 09:20:49.938336 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.65s 2025-02-10 09:20:49.938349 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 5.38s 2025-02-10 09:20:49.938361 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 5.13s 2025-02-10 09:20:49.938374 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 4.31s 2025-02-10 09:20:49.938386 | orchestrator | module-load : Drop module persistence ----------------------------------- 4.13s 2025-02-10 09:20:49.938398 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.74s 2025-02-10 09:20:49.938411 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.59s 2025-02-10 09:20:49.938423 | orchestrator | openvswitch : include_tasks --------------------------------------------- 3.36s 2025-02-10 09:20:49.938436 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.80s 2025-02-10 09:20:49.938448 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 2.75s 2025-02-10 09:20:49.938461 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.55s 2025-02-10 09:20:49.938473 | orchestrator | openvswitch : Copying over start-ovsdb-server files for 
openvswitch-db-server --- 2.47s 2025-02-10 09:20:49.938485 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.43s 2025-02-10 09:20:49.938498 | orchestrator | module-load : Load modules ---------------------------------------------- 2.23s 2025-02-10 09:20:49.938510 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.12s 2025-02-10 09:20:49.938522 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.78s 2025-02-10 09:20:49.938535 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.69s 2025-02-10 09:20:49.938547 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.20s 2025-02-10 09:20:49.938559 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.06s 2025-02-10 09:20:49.938578 | orchestrator | 2025-02-10 09:20:49 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:52.988124 | orchestrator | 2025-02-10 09:20:49 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:20:52.988390 | orchestrator | 2025-02-10 09:20:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:52.988438 | orchestrator | 2025-02-10 09:20:52 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:52.989228 | orchestrator | 2025-02-10 09:20:52 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:52.989263 | orchestrator | 2025-02-10 09:20:52 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:52.990064 | orchestrator | 2025-02-10 09:20:52 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:20:52.990950 | orchestrator | 2025-02-10 09:20:52 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:20:52.992979 | orchestrator | 2025-02-10 09:20:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:56.066538 | orchestrator | 2025-02-10 09:20:56 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:56.082885 | orchestrator | 2025-02-10 09:20:56 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:56.085397 | orchestrator | 2025-02-10 09:20:56 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:56.089287 | orchestrator | 2025-02-10 09:20:56 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:20:59.133082 | orchestrator | 2025-02-10 09:20:56 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:20:59.133244 | orchestrator | 2025-02-10 09:20:56 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:20:59.133316 | orchestrator | 2025-02-10 09:20:59 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:20:59.133415 | orchestrator | 2025-02-10 09:20:59 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:20:59.133431 | orchestrator | 2025-02-10 09:20:59 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:20:59.134434 | orchestrator | 2025-02-10 09:20:59 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:20:59.135088 | orchestrator | 2025-02-10 09:20:59 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 
2025-02-10 09:21:02.182527 | orchestrator | 2025-02-10 09:20:59 | INFO  | Wait 1 second(s) until the next check [identical check/wait cycles repeat roughly every 3 seconds from 09:21:02 through 09:22:06; tasks ccf8f4bd-9349-494d-b69b-7e63ea35e96c, 9191870d-99c8-44d6-affe-faca23c4c1fe, 7647509d-e8ff-4cff-9d70-878fd001ac2a, 5aefd25b-a85f-4540-9042-1fb56b7c320d and 378ce63d-1279-43f0-a513-b8e4717ecff7 remain in state STARTED throughout] 2025-02-10 09:22:09.413535 | orchestrator | 2025-02-10 09:22:09 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:22:09.414515 | orchestrator | 2025-02-10 
09:22:09 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:22:09.415312 | orchestrator | 2025-02-10 09:22:09 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:22:09.415392 | orchestrator | 2025-02-10 09:22:09 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:22:09.416394 | orchestrator | 2025-02-10 09:22:09 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:22:12.482213 | orchestrator | 2025-02-10 09:22:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:12.482378 | orchestrator | 2025-02-10 09:22:12 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:22:12.482540 | orchestrator | 2025-02-10 09:22:12 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:22:12.482568 | orchestrator | 2025-02-10 09:22:12 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:22:12.482814 | orchestrator | 2025-02-10 09:22:12 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:22:12.483563 | orchestrator | 2025-02-10 09:22:12 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:22:12.483841 | orchestrator | 2025-02-10 09:22:12 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:15.540175 | orchestrator | 2025-02-10 09:22:15 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:22:15.541498 | orchestrator | 2025-02-10 09:22:15 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:22:15.542279 | orchestrator | 2025-02-10 09:22:15 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:22:15.543333 | orchestrator | 2025-02-10 09:22:15 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:22:15.550439 | orchestrator | 2025-02-10 09:22:15 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state STARTED 2025-02-10 09:22:18.608176 | orchestrator | 2025-02-10 09:22:15 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:18.608346 | orchestrator | 2025-02-10 09:22:18 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:22:18.608445 | orchestrator | 2025-02-10 09:22:18 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:22:18.608469 | orchestrator | 2025-02-10 09:22:18 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:22:18.616719 | orchestrator | 2025-02-10 09:22:18 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:22:18.622356 | orchestrator | 2025-02-10 09:22:18.622413 | orchestrator | 2025-02-10 09:22:18.622429 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-02-10 09:22:18.622444 | orchestrator | 2025-02-10 09:22:18.622459 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-02-10 09:22:18.622474 | orchestrator | Monday 10 February 2025 09:19:51 +0000 (0:00:00.497) 0:00:00.497 ******* 2025-02-10 09:22:18.622489 | orchestrator | ok: [localhost] => { 2025-02-10 09:22:18.622505 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
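For readers unfamiliar with this pre-check: the timeout above is expected on a fresh deployment, which is why it is announced and then ignored. A minimal sketch of how such a probe is typically written with Ansible's wait_for module — the VIP 192.168.16.9, port 15672, the 2-second elapsed time, and the ignored failure come from the log above, while the task wiring and variable name are illustrative assumptions, not this playbook's actual source:

    - name: Check RabbitMQ service
      ansible.builtin.wait_for:
        host: 192.168.16.9                  # internal API VIP, as reported in the log
        port: 15672                         # RabbitMQ management port
        search_regex: RabbitMQ Management   # string the probe waits for in the response
        timeout: 2
      register: rabbitmq_check              # hypothetical variable name
      ignore_errors: true                   # a timeout only means RabbitMQ is not deployed yet

    - name: Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running
      ansible.builtin.set_fact:
        kolla_action_rabbitmq: upgrade
      when: rabbitmq_check is succeeded     # skipped in this run because the probe timed out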
2025-02-10 09:22:18.622519 | orchestrator | } 2025-02-10 09:22:18.622534 | orchestrator | 2025-02-10 09:22:18.622548 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-02-10 09:22:18.622563 | orchestrator | Monday 10 February 2025 09:19:51 +0000 (0:00:00.079) 0:00:00.576 ******* 2025-02-10 09:22:18.622578 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-02-10 09:22:18.622594 | orchestrator | ...ignoring 2025-02-10 09:22:18.622608 | orchestrator | 2025-02-10 09:22:18.622696 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-02-10 09:22:18.622712 | orchestrator | Monday 10 February 2025 09:19:54 +0000 (0:00:02.689) 0:00:03.266 ******* 2025-02-10 09:22:18.622726 | orchestrator | skipping: [localhost] 2025-02-10 09:22:18.622741 | orchestrator | 2025-02-10 09:22:18.622755 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-02-10 09:22:18.622769 | orchestrator | Monday 10 February 2025 09:19:54 +0000 (0:00:00.064) 0:00:03.331 ******* 2025-02-10 09:22:18.622783 | orchestrator | ok: [localhost] 2025-02-10 09:22:18.622797 | orchestrator | 2025-02-10 09:22:18.622811 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:22:18.622824 | orchestrator | 2025-02-10 09:22:18.622857 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:22:18.622872 | orchestrator | Monday 10 February 2025 09:19:54 +0000 (0:00:00.293) 0:00:03.624 ******* 2025-02-10 09:22:18.622886 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:18.622900 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:18.622914 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:18.622928 | orchestrator | 2025-02-10 09:22:18.622942 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:22:18.622956 | orchestrator | Monday 10 February 2025 09:19:55 +0000 (0:00:00.815) 0:00:04.444 ******* 2025-02-10 09:22:18.622970 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-02-10 09:22:18.622984 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-02-10 09:22:18.622998 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-02-10 09:22:18.623012 | orchestrator | 2025-02-10 09:22:18.623026 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-02-10 09:22:18.623040 | orchestrator | 2025-02-10 09:22:18.623054 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-02-10 09:22:18.623068 | orchestrator | Monday 10 February 2025 09:19:56 +0000 (0:00:00.735) 0:00:05.179 ******* 2025-02-10 09:22:18.623082 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:22:18.623097 | orchestrator | 2025-02-10 09:22:18.623111 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-02-10 09:22:18.623131 | orchestrator | Monday 10 February 2025 09:19:57 +0000 (0:00:01.038) 0:00:06.218 ******* 2025-02-10 09:22:18.623154 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:18.623175 | orchestrator | 2025-02-10 09:22:18.623197 | orchestrator | TASK 
[rabbitmq : Get current RabbitMQ version] ********************************* 2025-02-10 09:22:18.623219 | orchestrator | Monday 10 February 2025 09:19:58 +0000 (0:00:01.086) 0:00:07.305 ******* 2025-02-10 09:22:18.623240 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:18.623264 | orchestrator | 2025-02-10 09:22:18.623286 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-02-10 09:22:18.623305 | orchestrator | Monday 10 February 2025 09:19:59 +0000 (0:00:00.843) 0:00:08.148 ******* 2025-02-10 09:22:18.623326 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:18.623349 | orchestrator | 2025-02-10 09:22:18.623371 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-02-10 09:22:18.623390 | orchestrator | Monday 10 February 2025 09:19:59 +0000 (0:00:00.364) 0:00:08.512 ******* 2025-02-10 09:22:18.623411 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:18.623432 | orchestrator | 2025-02-10 09:22:18.623454 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-02-10 09:22:18.623476 | orchestrator | Monday 10 February 2025 09:20:00 +0000 (0:00:00.353) 0:00:08.865 ******* 2025-02-10 09:22:18.623497 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:18.623518 | orchestrator | 2025-02-10 09:22:18.623540 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-02-10 09:22:18.623564 | orchestrator | Monday 10 February 2025 09:20:00 +0000 (0:00:00.440) 0:00:09.306 ******* 2025-02-10 09:22:18.623587 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:22:18.623625 | orchestrator | 2025-02-10 09:22:18.623810 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-02-10 09:22:18.623846 | orchestrator | Monday 10 February 2025 09:20:02 +0000 (0:00:01.737) 0:00:11.043 ******* 2025-02-10 09:22:18.623868 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:18.623891 | orchestrator | 2025-02-10 09:22:18.623915 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-02-10 09:22:18.623939 | orchestrator | Monday 10 February 2025 09:20:03 +0000 (0:00:01.251) 0:00:12.295 ******* 2025-02-10 09:22:18.623960 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:18.623982 | orchestrator | 2025-02-10 09:22:18.624003 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-02-10 09:22:18.624026 | orchestrator | Monday 10 February 2025 09:20:04 +0000 (0:00:01.491) 0:00:13.786 ******* 2025-02-10 09:22:18.624049 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:18.624071 | orchestrator | 2025-02-10 09:22:18.624114 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-02-10 09:22:18.624137 | orchestrator | Monday 10 February 2025 09:20:05 +0000 (0:00:00.919) 0:00:14.706 ******* 2025-02-10 09:22:18.624158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:22:18.624177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:22:18.624193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:22:18.624221 | orchestrator | 2025-02-10 09:22:18.624235 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-02-10 09:22:18.624249 | orchestrator | Monday 10 February 2025 09:20:07 +0000 (0:00:01.389) 0:00:16.095 ******* 2025-02-10 09:22:18.624274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:22:18.624296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:22:18.624320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:22:18.624344 | orchestrator | 2025-02-10 09:22:18.624363 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-02-10 09:22:18.624394 | orchestrator | Monday 10 February 2025 09:20:09 +0000 (0:00:01.726) 0:00:17.822 ******* 2025-02-10 09:22:18.624415 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-02-10 09:22:18.624437 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-02-10 09:22:18.624460 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-02-10 09:22:18.624483 | orchestrator | 2025-02-10 09:22:18.624514 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-02-10 09:22:18.624537 | orchestrator | Monday 10 February 2025 09:20:13 +0000 (0:00:04.737) 0:00:22.560 ******* 2025-02-10 09:22:18.624560 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-02-10 09:22:18.624582 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-02-10 09:22:18.624597 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-02-10 09:22:18.624611 | orchestrator | 2025-02-10 09:22:18.624625 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-02-10 09:22:18.624639 | orchestrator | Monday 10 February 2025 09:20:16 +0000 (0:00:02.966) 0:00:25.527 ******* 2025-02-10 09:22:18.624691 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-02-10 09:22:18.624708 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-02-10 09:22:18.624722 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-02-10 09:22:18.624736 | orchestrator | 2025-02-10 09:22:18.624750 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-02-10 09:22:18.624774 | orchestrator | Monday 10 February 2025 09:20:18 +0000 (0:00:01.854) 0:00:27.381 ******* 2025-02-10 09:22:18.624789 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-02-10 09:22:18.624803 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-02-10 09:22:18.624817 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-02-10 09:22:18.624831 | orchestrator | 2025-02-10 09:22:18.624845 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-02-10 09:22:18.624859 | orchestrator | Monday 10 February 2025 09:20:20 +0000 (0:00:02.134) 0:00:29.516 ******* 2025-02-10 09:22:18.624873 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-02-10 09:22:18.624894 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-02-10 09:22:18.624909 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-02-10 09:22:18.624923 | orchestrator | 2025-02-10 09:22:18.624937 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-02-10 09:22:18.624951 | orchestrator | Monday 10 February 2025 09:20:22 +0000 (0:00:01.851) 0:00:31.368 ******* 2025-02-10 09:22:18.624965 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-02-10 09:22:18.624979 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-02-10 09:22:18.624993 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-02-10 09:22:18.625007 | orchestrator | 2025-02-10 09:22:18.625021 | orchestrator 
| TASK [rabbitmq : include_tasks] ************************************************ 2025-02-10 09:22:18.625035 | orchestrator | Monday 10 February 2025 09:20:25 +0000 (0:00:02.560) 0:00:33.928 ******* 2025-02-10 09:22:18.625048 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:18.625062 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:18.625087 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:18.625103 | orchestrator | 2025-02-10 09:22:18.625118 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-02-10 09:22:18.625134 | orchestrator | Monday 10 February 2025 09:20:26 +0000 (0:00:00.989) 0:00:34.917 ******* 2025-02-10 09:22:18.625151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:22:18.625169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:22:18.625197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:22:18.625214 | orchestrator | 2025-02-10 09:22:18.625230 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-02-10 09:22:18.625245 | orchestrator | Monday 10 February 2025 09:20:28 +0000 (0:00:01.982) 0:00:36.900 ******* 2025-02-10 09:22:18.625261 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:18.625276 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:18.625292 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:18.625307 | orchestrator | 2025-02-10 09:22:18.625328 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-02-10 09:22:18.625342 | orchestrator | Monday 10 February 2025 09:20:29 +0000 (0:00:01.875) 0:00:38.775 ******* 2025-02-10 09:22:18.625356 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:18.625370 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:18.625384 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:18.625398 | orchestrator | 2025-02-10 09:22:18.625412 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-02-10 09:22:18.625426 | orchestrator | Monday 10 February 2025 09:20:34 +0000 (0:00:04.380) 0:00:43.156 ******* 2025-02-10 09:22:18.625439 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:18.625453 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:18.625467 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:18.625481 | orchestrator | 2025-02-10 09:22:18.625495 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-02-10 09:22:18.625509 | orchestrator | 2025-02-10 09:22:18.625523 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-02-10 09:22:18.625536 | orchestrator | Monday 10 February 2025 09:20:34 +0000 (0:00:00.480) 0:00:43.636 ******* 2025-02-10 09:22:18.625550 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:18.625564 | orchestrator | 2025-02-10 09:22:18.625578 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-02-10 09:22:18.625592 | orchestrator | Monday 10 February 2025 09:20:35 +0000 (0:00:00.847) 0:00:44.484 ******* 2025-02-10 09:22:18.625605 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:18.625619 | orchestrator | 2025-02-10 09:22:18.625633 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-02-10 09:22:18.625647 | orchestrator | Monday 10 February 2025 09:20:36 +0000 (0:00:00.588) 0:00:45.072 ******* 2025-02-10 09:22:18.625727 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:18.625742 | orchestrator | 2025-02-10 09:22:18.625756 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-02-10 09:22:18.625770 | orchestrator | Monday 10 February 2025 09:20:39 +0000 (0:00:02.827) 0:00:47.901 ******* 2025-02-10 09:22:18.625784 | orchestrator | changed: [testbed-node-0] 2025-02-10 
09:22:18.625797 | orchestrator | 2025-02-10 09:22:18.625817 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-02-10 09:22:18.625832 | orchestrator | 2025-02-10 09:22:18.625846 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-02-10 09:22:18.625859 | orchestrator | Monday 10 February 2025 09:21:33 +0000 (0:00:54.757) 0:01:42.658 ******* 2025-02-10 09:22:18.625873 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:18.625887 | orchestrator | 2025-02-10 09:22:18.625901 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-02-10 09:22:18.625915 | orchestrator | Monday 10 February 2025 09:21:34 +0000 (0:00:00.506) 0:01:43.165 ******* 2025-02-10 09:22:18.625929 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:18.625942 | orchestrator | 2025-02-10 09:22:18.625956 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-02-10 09:22:18.625970 | orchestrator | Monday 10 February 2025 09:21:34 +0000 (0:00:00.207) 0:01:43.373 ******* 2025-02-10 09:22:18.625984 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:18.625998 | orchestrator | 2025-02-10 09:22:18.626049 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-02-10 09:22:18.626066 | orchestrator | Monday 10 February 2025 09:21:36 +0000 (0:00:02.082) 0:01:45.456 ******* 2025-02-10 09:22:18.626080 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:18.626094 | orchestrator | 2025-02-10 09:22:18.626108 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-02-10 09:22:18.626122 | orchestrator | 2025-02-10 09:22:18.626136 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-02-10 09:22:18.626149 | orchestrator | Monday 10 February 2025 09:21:51 +0000 (0:00:15.031) 0:02:00.487 ******* 2025-02-10 09:22:18.626163 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:18.626184 | orchestrator | 2025-02-10 09:22:18.626198 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-02-10 09:22:18.626212 | orchestrator | Monday 10 February 2025 09:21:52 +0000 (0:00:00.742) 0:02:01.230 ******* 2025-02-10 09:22:18.626226 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:18.626240 | orchestrator | 2025-02-10 09:22:18.626254 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-02-10 09:22:18.626266 | orchestrator | Monday 10 February 2025 09:21:52 +0000 (0:00:00.442) 0:02:01.672 ******* 2025-02-10 09:22:18.626278 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:18.626296 | orchestrator | 2025-02-10 09:22:18.626317 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-02-10 09:22:18.626330 | orchestrator | Monday 10 February 2025 09:21:55 +0000 (0:00:02.332) 0:02:04.004 ******* 2025-02-10 09:22:18.626342 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:18.626355 | orchestrator | 2025-02-10 09:22:18.626367 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-02-10 09:22:18.626379 | orchestrator | 2025-02-10 09:22:18.626392 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-02-10 09:22:18.626404 | orchestrator | 
Monday 10 February 2025 09:22:11 +0000 (0:00:16.567) 0:02:20.572 ******* 2025-02-10 09:22:18.626416 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:22:18.626429 | orchestrator | 2025-02-10 09:22:18.626441 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-02-10 09:22:18.626453 | orchestrator | Monday 10 February 2025 09:22:13 +0000 (0:00:01.839) 0:02:22.412 ******* 2025-02-10 09:22:18.626466 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-02-10 09:22:18.626478 | orchestrator | enable_outward_rabbitmq_True 2025-02-10 09:22:18.626494 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-02-10 09:22:18.626515 | orchestrator | outward_rabbitmq_restart 2025-02-10 09:22:18.626537 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:18.626557 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:18.626577 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:18.626596 | orchestrator | 2025-02-10 09:22:18.626616 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-02-10 09:22:18.626636 | orchestrator | skipping: no hosts matched 2025-02-10 09:22:18.626682 | orchestrator | 2025-02-10 09:22:18.626704 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-02-10 09:22:18.626723 | orchestrator | skipping: no hosts matched 2025-02-10 09:22:18.626743 | orchestrator | 2025-02-10 09:22:18.626762 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-02-10 09:22:18.626784 | orchestrator | skipping: no hosts matched 2025-02-10 09:22:18.626803 | orchestrator | 2025-02-10 09:22:18.626825 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:22:18.626839 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-02-10 09:22:18.626853 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-02-10 09:22:18.626866 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:22:18.626880 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:22:18.626892 | orchestrator | 2025-02-10 09:22:18.626904 | orchestrator | 2025-02-10 09:22:18.626917 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:22:18.626929 | orchestrator | Monday 10 February 2025 09:22:17 +0000 (0:00:03.778) 0:02:26.191 ******* 2025-02-10 09:22:18.626941 | orchestrator | =============================================================================== 2025-02-10 09:22:18.626962 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 86.36s 2025-02-10 09:22:18.626981 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 7.24s 2025-02-10 09:22:18.626994 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 4.74s 2025-02-10 09:22:18.627006 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 4.38s 2025-02-10 09:22:18.627018 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.78s 2025-02-10 09:22:18.627031 | orchestrator | 
rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.97s 2025-02-10 09:22:18.627043 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.69s 2025-02-10 09:22:18.627056 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.56s 2025-02-10 09:22:18.627068 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.13s 2025-02-10 09:22:18.627080 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.10s 2025-02-10 09:22:18.627092 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.98s 2025-02-10 09:22:18.627104 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.88s 2025-02-10 09:22:18.627116 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.85s 2025-02-10 09:22:18.627129 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.85s 2025-02-10 09:22:18.627141 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.84s 2025-02-10 09:22:18.627153 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.74s 2025-02-10 09:22:18.627165 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.73s 2025-02-10 09:22:18.627178 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.49s 2025-02-10 09:22:18.627190 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.39s 2025-02-10 09:22:18.627202 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.25s 2025-02-10 09:22:18.627215 | orchestrator | 2025-02-10 09:22:18 | INFO  | Task 378ce63d-1279-43f0-a513-b8e4717ecff7 is in state SUCCESS 2025-02-10 09:22:18.627235 | orchestrator | 2025-02-10 09:22:18 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:21.672072 | orchestrator | 2025-02-10 09:22:21 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:22:21.674217 | orchestrator | 2025-02-10 09:22:21 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:22:21.674298 | orchestrator | 2025-02-10 09:22:21 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:22:21.674932 | orchestrator | 2025-02-10 09:22:21 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:22:24.706429 | orchestrator | 2025-02-10 09:22:21 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:24.706623 | orchestrator | 2025-02-10 09:22:24 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:22:24.706835 | orchestrator | 2025-02-10 09:22:24 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state STARTED 2025-02-10 09:22:24.706866 | orchestrator | 2025-02-10 09:22:24 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:22:24.707331 | orchestrator | 2025-02-10 09:22:24 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:22:24.707471 | orchestrator | 2025-02-10 09:22:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:27.744634 | orchestrator | 2025-02-10 09:22:27 | INFO  | Task d7e8dba8-07a4-412e-a55c-40f7d714903e is in state STARTED 2025-02-10 09:22:27.744879 | orchestrator 
| 2025-02-10 09:22:27 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:22:27.744909 | orchestrator | 2025-02-10 09:22:27 | INFO  | Task b996fa9c-021e-4b5e-ada1-4599a5118fa3 is in state STARTED 2025-02-10 09:22:27.746328 | orchestrator | 2025-02-10 09:22:27 | INFO  | Task 9191870d-99c8-44d6-affe-faca23c4c1fe is in state SUCCESS 2025-02-10 09:22:27.749112 | orchestrator | 2025-02-10 09:22:27.749202 | orchestrator | 2025-02-10 09:22:27.749222 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-02-10 09:22:27.749238 | orchestrator | 2025-02-10 09:22:27.749253 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-02-10 09:22:27.749267 | orchestrator | Monday 10 February 2025 09:18:01 +0000 (0:00:00.392) 0:00:00.392 ******* 2025-02-10 09:22:27.749282 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:22:27.749297 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:22:27.749311 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:22:27.749325 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:27.749339 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:27.749353 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:27.749366 | orchestrator | 2025-02-10 09:22:27.749381 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-02-10 09:22:27.749395 | orchestrator | Monday 10 February 2025 09:18:03 +0000 (0:00:01.927) 0:00:02.320 ******* 2025-02-10 09:22:27.749409 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:27.749424 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:27.749438 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:27.749452 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.749465 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.749480 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.749494 | orchestrator | 2025-02-10 09:22:27.749508 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-02-10 09:22:27.749522 | orchestrator | Monday 10 February 2025 09:18:05 +0000 (0:00:01.828) 0:00:04.149 ******* 2025-02-10 09:22:27.749536 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:27.749550 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:27.749564 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:27.749578 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.749591 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.749605 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.749619 | orchestrator | 2025-02-10 09:22:27.749633 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-02-10 09:22:27.749647 | orchestrator | Monday 10 February 2025 09:18:06 +0000 (0:00:01.056) 0:00:05.205 ******* 2025-02-10 09:22:27.749686 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:22:27.749703 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:27.749719 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:22:27.749735 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:27.749751 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:27.749766 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:22:27.749782 | orchestrator | 2025-02-10 09:22:27.749799 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] 
************************************* 2025-02-10 09:22:27.749815 | orchestrator | Monday 10 February 2025 09:18:08 +0000 (0:00:02.207) 0:00:07.412 ******* 2025-02-10 09:22:27.749830 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:22:27.749846 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:22:27.749862 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:27.749878 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:27.749893 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:27.749909 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:22:27.749924 | orchestrator | 2025-02-10 09:22:27.749941 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-02-10 09:22:27.749956 | orchestrator | Monday 10 February 2025 09:18:10 +0000 (0:00:01.665) 0:00:09.078 ******* 2025-02-10 09:22:27.749970 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:22:27.750008 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:22:27.750082 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:22:27.750100 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:27.750114 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:27.750128 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:27.750141 | orchestrator | 2025-02-10 09:22:27.750172 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-02-10 09:22:27.750187 | orchestrator | Monday 10 February 2025 09:18:11 +0000 (0:00:01.436) 0:00:10.514 ******* 2025-02-10 09:22:27.750202 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:27.750216 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:27.750230 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:27.750244 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.750258 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.750272 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.750292 | orchestrator | 2025-02-10 09:22:27.750306 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-02-10 09:22:27.750321 | orchestrator | Monday 10 February 2025 09:18:13 +0000 (0:00:01.386) 0:00:11.900 ******* 2025-02-10 09:22:27.750335 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:27.750349 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:27.750363 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:27.750378 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.750392 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.750407 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.750421 | orchestrator | 2025-02-10 09:22:27.750436 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-02-10 09:22:27.750450 | orchestrator | Monday 10 February 2025 09:18:14 +0000 (0:00:01.584) 0:00:13.485 ******* 2025-02-10 09:22:27.750464 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-10 09:22:27.750478 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-10 09:22:27.750492 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:27.750507 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-10 09:22:27.750521 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-10 
09:22:27.750535 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:27.750549 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-10 09:22:27.750563 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-10 09:22:27.750577 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:27.750591 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-10 09:22:27.750619 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-10 09:22:27.750634 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.750648 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-10 09:22:27.750695 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-10 09:22:27.750710 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.750724 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-10 09:22:27.750738 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-10 09:22:27.750752 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.750766 | orchestrator | 2025-02-10 09:22:27.750780 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-02-10 09:22:27.750794 | orchestrator | Monday 10 February 2025 09:18:15 +0000 (0:00:00.780) 0:00:14.265 ******* 2025-02-10 09:22:27.750808 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:27.750822 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:27.750835 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:27.750859 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.750874 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.750888 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.750902 | orchestrator | 2025-02-10 09:22:27.750916 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-02-10 09:22:27.750932 | orchestrator | Monday 10 February 2025 09:18:16 +0000 (0:00:01.161) 0:00:15.427 ******* 2025-02-10 09:22:27.750946 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:22:27.750960 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:22:27.750974 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:22:27.750988 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:27.751002 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:27.751016 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:27.751030 | orchestrator | 2025-02-10 09:22:27.751044 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-02-10 09:22:27.751059 | orchestrator | Monday 10 February 2025 09:18:18 +0000 (0:00:01.490) 0:00:16.917 ******* 2025-02-10 09:22:27.751073 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:22:27.751087 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:27.751101 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:22:27.751115 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:22:27.751129 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:27.751143 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:27.751157 | orchestrator | 2025-02-10 09:22:27.751172 | orchestrator | TASK [k3s_download : Download k3s binary arm64] 
******************************** 2025-02-10 09:22:27.751186 | orchestrator | Monday 10 February 2025 09:18:23 +0000 (0:00:05.629) 0:00:22.547 ******* 2025-02-10 09:22:27.751199 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:27.751213 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:27.751227 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:27.751241 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.751255 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.751268 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.751282 | orchestrator | 2025-02-10 09:22:27.751296 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-02-10 09:22:27.751310 | orchestrator | Monday 10 February 2025 09:18:25 +0000 (0:00:01.420) 0:00:23.967 ******* 2025-02-10 09:22:27.751324 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:27.751338 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:27.751352 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:27.751366 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.751379 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.751393 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.751407 | orchestrator | 2025-02-10 09:22:27.751422 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-02-10 09:22:27.751437 | orchestrator | Monday 10 February 2025 09:18:27 +0000 (0:00:01.794) 0:00:25.762 ******* 2025-02-10 09:22:27.751451 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:27.751465 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:27.751479 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:27.751493 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.751507 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.751521 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.751535 | orchestrator | 2025-02-10 09:22:27.751549 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-02-10 09:22:27.751563 | orchestrator | Monday 10 February 2025 09:18:27 +0000 (0:00:00.597) 0:00:26.359 ******* 2025-02-10 09:22:27.751577 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-02-10 09:22:27.751592 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-02-10 09:22:27.751606 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:27.751620 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-02-10 09:22:27.751642 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-02-10 09:22:27.751714 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:27.751732 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-02-10 09:22:27.751746 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-02-10 09:22:27.751761 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:27.751782 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-02-10 09:22:27.751796 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-02-10 09:22:27.751810 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.751824 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-02-10 09:22:27.751838 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-02-10 
09:22:27.751853 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.751867 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-02-10 09:22:27.751881 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-02-10 09:22:27.751895 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.751909 | orchestrator | 2025-02-10 09:22:27.751923 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-02-10 09:22:27.751950 | orchestrator | Monday 10 February 2025 09:18:29 +0000 (0:00:01.311) 0:00:27.670 ******* 2025-02-10 09:22:27.751964 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:27.751977 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:27.751989 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:27.752016 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.752029 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.752052 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.752065 | orchestrator | 2025-02-10 09:22:27.752077 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-02-10 09:22:27.752090 | orchestrator | 2025-02-10 09:22:27.752104 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-02-10 09:22:27.752117 | orchestrator | Monday 10 February 2025 09:18:30 +0000 (0:00:01.485) 0:00:29.156 ******* 2025-02-10 09:22:27.752129 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:27.752142 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:27.752155 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:27.752168 | orchestrator | 2025-02-10 09:22:27.752181 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-02-10 09:22:27.752193 | orchestrator | Monday 10 February 2025 09:18:31 +0000 (0:00:01.052) 0:00:30.209 ******* 2025-02-10 09:22:27.752206 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:27.752219 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:27.752231 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:27.752244 | orchestrator | 2025-02-10 09:22:27.752257 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-02-10 09:22:27.752269 | orchestrator | Monday 10 February 2025 09:18:32 +0000 (0:00:01.312) 0:00:31.521 ******* 2025-02-10 09:22:27.752282 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:27.752295 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:27.752307 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:27.752320 | orchestrator | 2025-02-10 09:22:27.752333 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-02-10 09:22:27.752346 | orchestrator | Monday 10 February 2025 09:18:33 +0000 (0:00:01.012) 0:00:32.534 ******* 2025-02-10 09:22:27.752359 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:27.752371 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:27.752383 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:27.752396 | orchestrator | 2025-02-10 09:22:27.752409 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-02-10 09:22:27.752421 | orchestrator | Monday 10 February 2025 09:18:34 +0000 (0:00:00.553) 0:00:33.087 ******* 2025-02-10 09:22:27.752434 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.752447 | orchestrator | skipping: 
[testbed-node-1] 2025-02-10 09:22:27.752467 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.752480 | orchestrator | 2025-02-10 09:22:27.752493 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-02-10 09:22:27.752506 | orchestrator | Monday 10 February 2025 09:18:34 +0000 (0:00:00.357) 0:00:33.445 ******* 2025-02-10 09:22:27.752518 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:22:27.752531 | orchestrator | 2025-02-10 09:22:27.752543 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-02-10 09:22:27.752556 | orchestrator | Monday 10 February 2025 09:18:35 +0000 (0:00:00.647) 0:00:34.093 ******* 2025-02-10 09:22:27.752569 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:27.752581 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:27.752594 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:27.752607 | orchestrator | 2025-02-10 09:22:27.752619 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-02-10 09:22:27.752632 | orchestrator | Monday 10 February 2025 09:18:37 +0000 (0:00:01.692) 0:00:35.786 ******* 2025-02-10 09:22:27.752644 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.752673 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.752688 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:27.752700 | orchestrator | 2025-02-10 09:22:27.752713 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-02-10 09:22:27.752726 | orchestrator | Monday 10 February 2025 09:18:37 +0000 (0:00:00.664) 0:00:36.450 ******* 2025-02-10 09:22:27.752739 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.752752 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.752764 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:27.752777 | orchestrator | 2025-02-10 09:22:27.752789 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-02-10 09:22:27.752802 | orchestrator | Monday 10 February 2025 09:18:38 +0000 (0:00:00.771) 0:00:37.222 ******* 2025-02-10 09:22:27.752815 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.752827 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.752840 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:27.752853 | orchestrator | 2025-02-10 09:22:27.752865 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-02-10 09:22:27.752878 | orchestrator | Monday 10 February 2025 09:18:40 +0000 (0:00:02.204) 0:00:39.426 ******* 2025-02-10 09:22:27.752890 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.752902 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.752915 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.752927 | orchestrator | 2025-02-10 09:22:27.752940 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-02-10 09:22:27.752953 | orchestrator | Monday 10 February 2025 09:18:41 +0000 (0:00:00.444) 0:00:39.871 ******* 2025-02-10 09:22:27.752965 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.752978 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.752990 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.753002 | orchestrator | 2025-02-10 
09:22:27.753015 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-02-10 09:22:27.753027 | orchestrator | Monday 10 February 2025 09:18:41 +0000 (0:00:00.347) 0:00:40.219 ******* 2025-02-10 09:22:27.753040 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:27.753053 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:27.753065 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:27.753077 | orchestrator | 2025-02-10 09:22:27.753090 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-02-10 09:22:27.753104 | orchestrator | Monday 10 February 2025 09:18:42 +0000 (0:00:01.277) 0:00:41.496 ******* 2025-02-10 09:22:27.753123 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-02-10 09:22:27.753138 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-02-10 09:22:27.753158 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-02-10 09:22:27.753171 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-02-10 09:22:27.753184 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-02-10 09:22:27.753197 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-02-10 09:22:27.753210 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-02-10 09:22:27.753222 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-02-10 09:22:27.753235 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-02-10 09:22:27.753247 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-02-10 09:22:27.753267 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-02-10 09:22:27.753281 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-02-10 09:22:27.753294 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-02-10 09:22:27.753307 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-02-10 09:22:27.753320 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
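The FAILED - RETRYING messages above are expected while the freshly started servers converge; the task keeps polling until every master appears in the cluster. A minimal sketch of an equivalent Ansible check, assuming k3s is on the PATH of the master nodes and that the masters live in an inventory group named "master" (the actual task in the k3s_server role may differ):

    - name: Verify that all nodes actually joined (illustrative sketch)
      ansible.builtin.command:
        cmd: k3s kubectl get nodes -o name
      register: joined_nodes
      changed_when: false
      # Retry until the number of registered nodes reaches the size of the
      # assumed "master" group (20 attempts, 10 seconds apart).
      until: >-
        joined_nodes.rc == 0 and
        (joined_nodes.stdout_lines | length) >= (groups['master'] | length)
      retries: 20
      delay: 10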
2025-02-10 09:22:27.753333 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:27.753346 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:27.753381 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:27.753394 | orchestrator | 2025-02-10 09:22:27.753407 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-02-10 09:22:27.753420 | orchestrator | Monday 10 February 2025 09:19:38 +0000 (0:00:55.744) 0:01:37.241 ******* 2025-02-10 09:22:27.753433 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.753445 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.753458 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.753472 | orchestrator | 2025-02-10 09:22:27.753484 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-02-10 09:22:27.753503 | orchestrator | Monday 10 February 2025 09:19:39 +0000 (0:00:00.536) 0:01:37.777 ******* 2025-02-10 09:22:27.753516 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:27.753536 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:27.753550 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:27.753563 | orchestrator | 2025-02-10 09:22:27.753576 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-02-10 09:22:27.753588 | orchestrator | Monday 10 February 2025 09:19:40 +0000 (0:00:01.287) 0:01:39.064 ******* 2025-02-10 09:22:27.753601 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:27.753614 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:27.753626 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:27.753639 | orchestrator | 2025-02-10 09:22:27.753651 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-02-10 09:22:27.753678 | orchestrator | Monday 10 February 2025 09:19:42 +0000 (0:00:01.634) 0:01:40.699 ******* 2025-02-10 09:22:27.753708 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:27.753721 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:27.753734 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:27.753747 | orchestrator | 2025-02-10 09:22:27.753760 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-02-10 09:22:27.753772 | orchestrator | Monday 10 February 2025 09:19:56 +0000 (0:00:14.713) 0:01:55.413 ******* 2025-02-10 09:22:27.753785 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:27.753798 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:27.753810 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:27.753823 | orchestrator | 2025-02-10 09:22:27.753836 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-02-10 09:22:27.753848 | orchestrator | Monday 10 February 2025 09:19:57 +0000 (0:00:00.747) 0:01:56.161 ******* 2025-02-10 09:22:27.753861 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:27.753873 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:27.753886 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:27.753898 | orchestrator | 2025-02-10 09:22:27.753912 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-02-10 09:22:27.753924 | orchestrator | Monday 10 February 2025 09:19:58 +0000 (0:00:00.653) 0:01:56.814 ******* 2025-02-10 09:22:27.753937 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:27.753950 | orchestrator | changed: 
[testbed-node-1] 2025-02-10 09:22:27.753963 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:27.753975 | orchestrator | 2025-02-10 09:22:27.753996 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-02-10 09:22:27.754009 | orchestrator | Monday 10 February 2025 09:19:58 +0000 (0:00:00.534) 0:01:57.349 ******* 2025-02-10 09:22:27.754058 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:27.754078 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:27.754099 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:27.754120 | orchestrator | 2025-02-10 09:22:27.754141 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-02-10 09:22:27.754158 | orchestrator | Monday 10 February 2025 09:19:59 +0000 (0:00:00.780) 0:01:58.129 ******* 2025-02-10 09:22:27.754172 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:27.754184 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:27.754196 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:27.754209 | orchestrator | 2025-02-10 09:22:27.754222 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-02-10 09:22:27.754234 | orchestrator | Monday 10 February 2025 09:19:59 +0000 (0:00:00.274) 0:01:58.404 ******* 2025-02-10 09:22:27.754247 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:27.754260 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:27.754272 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:27.754284 | orchestrator | 2025-02-10 09:22:27.754296 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-02-10 09:22:27.754309 | orchestrator | Monday 10 February 2025 09:20:00 +0000 (0:00:00.646) 0:01:59.050 ******* 2025-02-10 09:22:27.754321 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:27.754334 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:27.754346 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:27.754359 | orchestrator | 2025-02-10 09:22:27.754372 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-02-10 09:22:27.754384 | orchestrator | Monday 10 February 2025 09:20:01 +0000 (0:00:00.713) 0:01:59.764 ******* 2025-02-10 09:22:27.754396 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:27.754409 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:27.754422 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:27.754434 | orchestrator | 2025-02-10 09:22:27.754447 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-02-10 09:22:27.754460 | orchestrator | Monday 10 February 2025 09:20:02 +0000 (0:00:01.150) 0:02:00.914 ******* 2025-02-10 09:22:27.754473 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:22:27.754485 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:22:27.754507 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:22:27.754520 | orchestrator | 2025-02-10 09:22:27.754533 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-02-10 09:22:27.754546 | orchestrator | Monday 10 February 2025 09:20:03 +0000 (0:00:00.827) 0:02:01.741 ******* 2025-02-10 09:22:27.754558 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.754570 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.754584 | orchestrator | skipping: [testbed-node-2] 2025-02-10 
09:22:27.754596 | orchestrator | 2025-02-10 09:22:27.754609 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-02-10 09:22:27.754621 | orchestrator | Monday 10 February 2025 09:20:03 +0000 (0:00:00.374) 0:02:02.115 ******* 2025-02-10 09:22:27.754634 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.754646 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.754722 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.754795 | orchestrator | 2025-02-10 09:22:27.754812 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-02-10 09:22:27.754841 | orchestrator | Monday 10 February 2025 09:20:03 +0000 (0:00:00.325) 0:02:02.441 ******* 2025-02-10 09:22:27.754853 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:27.754866 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:27.754879 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:27.754892 | orchestrator | 2025-02-10 09:22:27.754904 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-02-10 09:22:27.754917 | orchestrator | Monday 10 February 2025 09:20:04 +0000 (0:00:00.921) 0:02:03.362 ******* 2025-02-10 09:22:27.754929 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:27.754941 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:27.754953 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:27.754976 | orchestrator | 2025-02-10 09:22:27.754989 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-02-10 09:22:27.755002 | orchestrator | Monday 10 February 2025 09:20:05 +0000 (0:00:00.800) 0:02:04.163 ******* 2025-02-10 09:22:27.755020 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-02-10 09:22:27.755033 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-02-10 09:22:27.755046 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-02-10 09:22:27.755058 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-02-10 09:22:27.755071 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-02-10 09:22:27.755084 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-02-10 09:22:27.755097 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-02-10 09:22:27.755110 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-02-10 09:22:27.755122 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-02-10 09:22:27.755135 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-02-10 09:22:27.755147 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-02-10 09:22:27.755160 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-02-10 09:22:27.755183 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-02-10 09:22:27.755196 | orchestrator 
| changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-02-10 09:22:27.755207 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-02-10 09:22:27.755227 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-02-10 09:22:27.755237 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-02-10 09:22:27.755247 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-02-10 09:22:27.755258 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-02-10 09:22:27.755268 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-02-10 09:22:27.755279 | orchestrator | 2025-02-10 09:22:27.755290 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-02-10 09:22:27.755300 | orchestrator | 2025-02-10 09:22:27.755310 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-02-10 09:22:27.755320 | orchestrator | Monday 10 February 2025 09:20:08 +0000 (0:00:03.083) 0:02:07.246 ******* 2025-02-10 09:22:27.755330 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:22:27.755340 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:22:27.755350 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:22:27.755365 | orchestrator | 2025-02-10 09:22:27.755376 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-02-10 09:22:27.755387 | orchestrator | Monday 10 February 2025 09:20:09 +0000 (0:00:00.530) 0:02:07.777 ******* 2025-02-10 09:22:27.755397 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:22:27.755407 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:22:27.755417 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:22:27.755428 | orchestrator | 2025-02-10 09:22:27.755438 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-02-10 09:22:27.755448 | orchestrator | Monday 10 February 2025 09:20:09 +0000 (0:00:00.772) 0:02:08.550 ******* 2025-02-10 09:22:27.755459 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:22:27.755469 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:22:27.755479 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:22:27.755489 | orchestrator | 2025-02-10 09:22:27.755500 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-02-10 09:22:27.755510 | orchestrator | Monday 10 February 2025 09:20:10 +0000 (0:00:00.475) 0:02:09.025 ******* 2025-02-10 09:22:27.755520 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:22:27.755530 | orchestrator | 2025-02-10 09:22:27.755541 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-02-10 09:22:27.755551 | orchestrator | Monday 10 February 2025 09:20:11 +0000 (0:00:00.771) 0:02:09.797 ******* 2025-02-10 09:22:27.755571 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:27.755581 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:27.755592 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:27.755602 | orchestrator | 2025-02-10 09:22:27.755613 | orchestrator | TASK [k3s_agent : Copy K3s 
http_proxy conf file] ******************************* 2025-02-10 09:22:27.755623 | orchestrator | Monday 10 February 2025 09:20:11 +0000 (0:00:00.467) 0:02:10.264 ******* 2025-02-10 09:22:27.755634 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:27.755644 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:27.755671 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:27.755684 | orchestrator | 2025-02-10 09:22:27.755695 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-02-10 09:22:27.755706 | orchestrator | Monday 10 February 2025 09:20:12 +0000 (0:00:00.443) 0:02:10.707 ******* 2025-02-10 09:22:27.755716 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:27.755726 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:27.755737 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:27.755757 | orchestrator | 2025-02-10 09:22:27.755768 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-02-10 09:22:27.755778 | orchestrator | Monday 10 February 2025 09:20:12 +0000 (0:00:00.452) 0:02:11.160 ******* 2025-02-10 09:22:27.755796 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:22:27.755807 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:22:27.755817 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:22:27.755827 | orchestrator | 2025-02-10 09:22:27.755837 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-02-10 09:22:27.755847 | orchestrator | Monday 10 February 2025 09:20:14 +0000 (0:00:01.795) 0:02:12.956 ******* 2025-02-10 09:22:27.755857 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:22:27.755867 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:22:27.755878 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:22:27.755887 | orchestrator | 2025-02-10 09:22:27.755898 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-02-10 09:22:27.755908 | orchestrator | 2025-02-10 09:22:27.755918 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-02-10 09:22:27.755928 | orchestrator | Monday 10 February 2025 09:20:22 +0000 (0:00:08.580) 0:02:21.537 ******* 2025-02-10 09:22:27.755938 | orchestrator | ok: [testbed-manager] 2025-02-10 09:22:27.755948 | orchestrator | 2025-02-10 09:22:27.755958 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-02-10 09:22:27.755968 | orchestrator | Monday 10 February 2025 09:20:23 +0000 (0:00:00.732) 0:02:22.270 ******* 2025-02-10 09:22:27.755978 | orchestrator | changed: [testbed-manager] 2025-02-10 09:22:27.755988 | orchestrator | 2025-02-10 09:22:27.755998 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-02-10 09:22:27.756008 | orchestrator | Monday 10 February 2025 09:20:24 +0000 (0:00:00.659) 0:02:22.930 ******* 2025-02-10 09:22:27.756019 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-02-10 09:22:27.756029 | orchestrator | 2025-02-10 09:22:27.756045 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-02-10 09:22:27.756056 | orchestrator | Monday 10 February 2025 09:20:25 +0000 (0:00:01.423) 0:02:24.353 ******* 2025-02-10 09:22:27.756067 | orchestrator | changed: [testbed-manager] 2025-02-10 09:22:27.756077 | orchestrator | 2025-02-10 
09:22:27.756093 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-02-10 09:22:27.756103 | orchestrator | Monday 10 February 2025 09:20:26 +0000 (0:00:01.102) 0:02:25.456 ******* 2025-02-10 09:22:27.756113 | orchestrator | changed: [testbed-manager] 2025-02-10 09:22:27.756123 | orchestrator | 2025-02-10 09:22:27.756134 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-02-10 09:22:27.756144 | orchestrator | Monday 10 February 2025 09:20:27 +0000 (0:00:00.812) 0:02:26.269 ******* 2025-02-10 09:22:27.756154 | orchestrator | changed: [testbed-manager -> localhost] 2025-02-10 09:22:27.756165 | orchestrator | 2025-02-10 09:22:27.756175 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-02-10 09:22:27.756185 | orchestrator | Monday 10 February 2025 09:20:28 +0000 (0:00:01.009) 0:02:27.279 ******* 2025-02-10 09:22:27.756195 | orchestrator | changed: [testbed-manager -> localhost] 2025-02-10 09:22:27.756205 | orchestrator | 2025-02-10 09:22:27.756216 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-02-10 09:22:27.756226 | orchestrator | Monday 10 February 2025 09:20:29 +0000 (0:00:00.624) 0:02:27.903 ******* 2025-02-10 09:22:27.756236 | orchestrator | changed: [testbed-manager] 2025-02-10 09:22:27.756247 | orchestrator | 2025-02-10 09:22:27.756257 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-02-10 09:22:27.756267 | orchestrator | Monday 10 February 2025 09:20:29 +0000 (0:00:00.542) 0:02:28.446 ******* 2025-02-10 09:22:27.756277 | orchestrator | changed: [testbed-manager] 2025-02-10 09:22:27.756287 | orchestrator | 2025-02-10 09:22:27.756297 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-02-10 09:22:27.756308 | orchestrator | 2025-02-10 09:22:27.756318 | orchestrator | TASK [osism.commons.kubectl : Gather variables for each operating system] ****** 2025-02-10 09:22:27.756328 | orchestrator | Monday 10 February 2025 09:20:30 +0000 (0:00:00.679) 0:02:29.125 ******* 2025-02-10 09:22:27.756343 | orchestrator | [WARNING]: Found variable using reserved name: q 2025-02-10 09:22:27.756360 | orchestrator | ok: [testbed-manager] 2025-02-10 09:22:27.756370 | orchestrator | 2025-02-10 09:22:27.756380 | orchestrator | TASK [osism.commons.kubectl : Include distribution specific install tasks] ***** 2025-02-10 09:22:27.756390 | orchestrator | Monday 10 February 2025 09:20:30 +0000 (0:00:00.233) 0:02:29.359 ******* 2025-02-10 09:22:27.756401 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-02-10 09:22:27.756414 | orchestrator | 2025-02-10 09:22:27.756424 | orchestrator | TASK [osism.commons.kubectl : Remove old architecture-dependent repository] **** 2025-02-10 09:22:27.756434 | orchestrator | Monday 10 February 2025 09:20:31 +0000 (0:00:00.413) 0:02:29.773 ******* 2025-02-10 09:22:27.756444 | orchestrator | ok: [testbed-manager] 2025-02-10 09:22:27.756454 | orchestrator | 2025-02-10 09:22:27.756464 | orchestrator | TASK [osism.commons.kubectl : Install apt-transport-https package] ************* 2025-02-10 09:22:27.756474 | orchestrator | Monday 10 February 2025 09:20:32 +0000 (0:00:00.980) 0:02:30.754 ******* 2025-02-10 09:22:27.756484 | orchestrator | ok: [testbed-manager] 
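The osism.commons.kubectl tasks that follow (gpg key, Debian repository, package install) set up an upstream Kubernetes apt repository before installing kubectl. Roughly equivalent standalone tasks, assuming the pkgs.k8s.io repository layout, a keyring path under /etc/apt/keyrings, and a pinned minor version (URLs, paths and version are assumptions; the role's actual variables may differ):

    # Illustrative only: approximate equivalent of the Debian-family install tasks.
    - name: Add Kubernetes apt signing key (assumed pkgs.k8s.io layout)
      ansible.builtin.get_url:
        url: https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key
        dest: /etc/apt/keyrings/kubernetes-apt-keyring.asc
        mode: "0644"

    - name: Add Kubernetes apt repository (Debian family)
      ansible.builtin.apt_repository:
        repo: "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.asc] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /"
        filename: kubernetes
        state: present

    - name: Install kubectl
      ansible.builtin.apt:
        name: kubectl
        state: present
        update_cache: true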
2025-02-10 09:22:27.756494 | orchestrator | 2025-02-10 09:22:27.756504 | orchestrator | TASK [osism.commons.kubectl : Add repository gpg key] ************************** 2025-02-10 09:22:27.756514 | orchestrator | Monday 10 February 2025 09:20:33 +0000 (0:00:01.711) 0:02:32.465 ******* 2025-02-10 09:22:27.756524 | orchestrator | changed: [testbed-manager] 2025-02-10 09:22:27.756534 | orchestrator | 2025-02-10 09:22:27.756545 | orchestrator | TASK [osism.commons.kubectl : Set permissions of gpg key] ********************** 2025-02-10 09:22:27.756555 | orchestrator | Monday 10 February 2025 09:20:34 +0000 (0:00:00.773) 0:02:33.239 ******* 2025-02-10 09:22:27.756565 | orchestrator | ok: [testbed-manager] 2025-02-10 09:22:27.756575 | orchestrator | 2025-02-10 09:22:27.756585 | orchestrator | TASK [osism.commons.kubectl : Add repository Debian] *************************** 2025-02-10 09:22:27.756596 | orchestrator | Monday 10 February 2025 09:20:35 +0000 (0:00:00.449) 0:02:33.689 ******* 2025-02-10 09:22:27.756606 | orchestrator | changed: [testbed-manager] 2025-02-10 09:22:27.756616 | orchestrator | 2025-02-10 09:22:27.756627 | orchestrator | TASK [osism.commons.kubectl : Install required packages] *********************** 2025-02-10 09:22:27.756637 | orchestrator | Monday 10 February 2025 09:20:43 +0000 (0:00:07.971) 0:02:41.661 ******* 2025-02-10 09:22:27.756648 | orchestrator | changed: [testbed-manager] 2025-02-10 09:22:27.756676 | orchestrator | 2025-02-10 09:22:27.756688 | orchestrator | TASK [osism.commons.kubectl : Remove kubectl symlink] ************************** 2025-02-10 09:22:27.756698 | orchestrator | Monday 10 February 2025 09:20:57 +0000 (0:00:13.985) 0:02:55.647 ******* 2025-02-10 09:22:27.756708 | orchestrator | ok: [testbed-manager] 2025-02-10 09:22:27.756717 | orchestrator | 2025-02-10 09:22:27.756728 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-02-10 09:22:27.756738 | orchestrator | 2025-02-10 09:22:27.756748 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-02-10 09:22:27.756763 | orchestrator | Monday 10 February 2025 09:20:57 +0000 (0:00:00.608) 0:02:56.255 ******* 2025-02-10 09:22:27.756773 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:27.756784 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:27.756795 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:27.756805 | orchestrator | 2025-02-10 09:22:27.756819 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-02-10 09:22:27.756830 | orchestrator | Monday 10 February 2025 09:20:58 +0000 (0:00:00.684) 0:02:56.939 ******* 2025-02-10 09:22:27.756840 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.756850 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.756861 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.756871 | orchestrator | 2025-02-10 09:22:27.756881 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-02-10 09:22:27.756891 | orchestrator | Monday 10 February 2025 09:20:58 +0000 (0:00:00.334) 0:02:57.274 ******* 2025-02-10 09:22:27.756908 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:22:27.756925 | orchestrator | 2025-02-10 09:22:27.756935 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 
2025-02-10 09:22:27.756946 | orchestrator | Monday 10 February 2025 09:20:59 +0000 (0:00:00.644) 0:02:57.918 ******* 2025-02-10 09:22:27.756956 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-02-10 09:22:27.756966 | orchestrator | 2025-02-10 09:22:27.756977 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-02-10 09:22:27.756988 | orchestrator | Monday 10 February 2025 09:21:00 +0000 (0:00:01.114) 0:02:59.033 ******* 2025-02-10 09:22:27.756998 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:22:27.757008 | orchestrator | 2025-02-10 09:22:27.757018 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-02-10 09:22:27.757029 | orchestrator | Monday 10 February 2025 09:21:01 +0000 (0:00:00.658) 0:02:59.691 ******* 2025-02-10 09:22:27.757039 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.757049 | orchestrator | 2025-02-10 09:22:27.757059 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-02-10 09:22:27.757069 | orchestrator | Monday 10 February 2025 09:21:01 +0000 (0:00:00.244) 0:02:59.936 ******* 2025-02-10 09:22:27.757080 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:22:27.757090 | orchestrator | 2025-02-10 09:22:27.757100 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-02-10 09:22:27.757110 | orchestrator | Monday 10 February 2025 09:21:02 +0000 (0:00:01.318) 0:03:01.255 ******* 2025-02-10 09:22:27.757120 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.757130 | orchestrator | 2025-02-10 09:22:27.757141 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-02-10 09:22:27.757151 | orchestrator | Monday 10 February 2025 09:21:02 +0000 (0:00:00.287) 0:03:01.542 ******* 2025-02-10 09:22:27.757161 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.757171 | orchestrator | 2025-02-10 09:22:27.757181 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-02-10 09:22:27.757191 | orchestrator | Monday 10 February 2025 09:21:03 +0000 (0:00:00.265) 0:03:01.807 ******* 2025-02-10 09:22:27.757202 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.757212 | orchestrator | 2025-02-10 09:22:27.757223 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-02-10 09:22:27.757233 | orchestrator | Monday 10 February 2025 09:21:03 +0000 (0:00:00.254) 0:03:02.062 ******* 2025-02-10 09:22:27.757244 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.757254 | orchestrator | 2025-02-10 09:22:27.757265 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-02-10 09:22:27.757275 | orchestrator | Monday 10 February 2025 09:21:03 +0000 (0:00:00.245) 0:03:02.307 ******* 2025-02-10 09:22:27.757285 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-02-10 09:22:27.757294 | orchestrator | 2025-02-10 09:22:27.757304 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-02-10 09:22:27.757315 | orchestrator | Monday 10 February 2025 09:21:15 +0000 (0:00:11.920) 0:03:14.228 ******* 2025-02-10 09:22:27.757325 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-02-10 09:22:27.757335 | orchestrator | FAILED - RETRYING: 
[testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2025-02-10 09:22:27.757346 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-02-10 09:22:27.757356 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-02-10 09:22:27.757366 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-02-10 09:22:27.757376 | orchestrator | 2025-02-10 09:22:27.757386 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-02-10 09:22:27.757396 | orchestrator | Monday 10 February 2025 09:21:57 +0000 (0:00:41.492) 0:03:55.720 ******* 2025-02-10 09:22:27.757406 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:22:27.757423 | orchestrator | 2025-02-10 09:22:27.757433 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-02-10 09:22:27.757443 | orchestrator | Monday 10 February 2025 09:21:58 +0000 (0:00:01.649) 0:03:57.370 ******* 2025-02-10 09:22:27.757453 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-02-10 09:22:27.757464 | orchestrator | 2025-02-10 09:22:27.757474 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-02-10 09:22:27.757484 | orchestrator | Monday 10 February 2025 09:21:59 +0000 (0:00:01.087) 0:03:58.457 ******* 2025-02-10 09:22:27.757494 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-02-10 09:22:27.757504 | orchestrator | 2025-02-10 09:22:27.757514 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-02-10 09:22:27.757532 | orchestrator | Monday 10 February 2025 09:22:01 +0000 (0:00:01.579) 0:04:00.037 ******* 2025-02-10 09:22:27.757543 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.757553 | orchestrator | 2025-02-10 09:22:27.757563 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-02-10 09:22:27.757574 | orchestrator | Monday 10 February 2025 09:22:01 +0000 (0:00:00.261) 0:04:00.298 ******* 2025-02-10 09:22:27.757584 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-02-10 09:22:27.757599 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-02-10 09:22:27.757609 | orchestrator | 2025-02-10 09:22:27.757620 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-02-10 09:22:27.757630 | orchestrator | Monday 10 February 2025 09:22:03 +0000 (0:00:01.702) 0:04:02.003 ******* 2025-02-10 09:22:27.757641 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.757699 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.757714 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.757724 | orchestrator | 2025-02-10 09:22:27.757734 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-02-10 09:22:27.757744 | orchestrator | Monday 10 February 2025 09:22:03 +0000 (0:00:00.565) 0:04:02.569 ******* 2025-02-10 09:22:27.757761 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:27.757773 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:27.757783 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:27.757793 | orchestrator | 2025-02-10 09:22:27.757803 | orchestrator | PLAY [Apply role k9s] 
********************************************************** 2025-02-10 09:22:27.757814 | orchestrator | 2025-02-10 09:22:27.757824 | orchestrator | TASK [osism.commons.k9s : Gather variables for each operating system] ********** 2025-02-10 09:22:27.757835 | orchestrator | Monday 10 February 2025 09:22:05 +0000 (0:00:01.165) 0:04:03.734 ******* 2025-02-10 09:22:27.757845 | orchestrator | ok: [testbed-manager] 2025-02-10 09:22:27.757856 | orchestrator | 2025-02-10 09:22:27.757866 | orchestrator | TASK [osism.commons.k9s : Include distribution specific install tasks] ********* 2025-02-10 09:22:27.757876 | orchestrator | Monday 10 February 2025 09:22:05 +0000 (0:00:00.626) 0:04:04.361 ******* 2025-02-10 09:22:27.757887 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-02-10 09:22:27.757897 | orchestrator | 2025-02-10 09:22:27.757907 | orchestrator | TASK [osism.commons.k9s : Install k9s packages] ******************************** 2025-02-10 09:22:27.757918 | orchestrator | Monday 10 February 2025 09:22:06 +0000 (0:00:00.289) 0:04:04.650 ******* 2025-02-10 09:22:27.757928 | orchestrator | changed: [testbed-manager] 2025-02-10 09:22:27.757938 | orchestrator | 2025-02-10 09:22:27.757949 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-02-10 09:22:27.757959 | orchestrator | 2025-02-10 09:22:27.757969 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-02-10 09:22:27.757979 | orchestrator | Monday 10 February 2025 09:22:12 +0000 (0:00:06.082) 0:04:10.732 ******* 2025-02-10 09:22:27.757990 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:22:27.758000 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:22:27.758045 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:22:27.758057 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:22:27.758066 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:22:27.758074 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:22:27.758083 | orchestrator | 2025-02-10 09:22:27.758092 | orchestrator | TASK [Manage labels] *********************************************************** 2025-02-10 09:22:27.758101 | orchestrator | Monday 10 February 2025 09:22:13 +0000 (0:00:00.949) 0:04:11.682 ******* 2025-02-10 09:22:27.758109 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-02-10 09:22:27.758118 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-02-10 09:22:27.758128 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-02-10 09:22:27.758136 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-02-10 09:22:27.758145 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-02-10 09:22:27.758154 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-02-10 09:22:27.758163 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-02-10 09:22:27.758172 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-02-10 09:22:27.758181 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-02-10 09:22:27.758190 | orchestrator 
| ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-02-10 09:22:27.758198 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-02-10 09:22:27.758207 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-02-10 09:22:27.758216 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-02-10 09:22:27.758229 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-02-10 09:22:27.758238 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-02-10 09:22:27.758246 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-02-10 09:22:27.758256 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-02-10 09:22:27.758265 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-02-10 09:22:27.758273 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-02-10 09:22:27.758282 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-02-10 09:22:27.758291 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-02-10 09:22:27.758299 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-02-10 09:22:27.758308 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-02-10 09:22:27.758317 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-02-10 09:22:27.758326 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-02-10 09:22:27.758335 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-02-10 09:22:27.758343 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-02-10 09:22:27.758352 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-02-10 09:22:27.758367 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-02-10 09:22:27.758377 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-02-10 09:22:27.758393 | orchestrator | 2025-02-10 09:22:27.758403 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-02-10 09:22:27.758411 | orchestrator | Monday 10 February 2025 09:22:23 +0000 (0:00:10.555) 0:04:22.237 ******* 2025-02-10 09:22:27.758420 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:27.758429 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:27.758438 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:27.758446 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.758455 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.758464 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.758472 | orchestrator | 2025-02-10 09:22:27.758481 | orchestrator | TASK [Manage taints] *********************************************************** 2025-02-10 09:22:27.758490 | orchestrator | Monday 10 February 2025 09:22:24 +0000 (0:00:00.582) 0:04:22.820 ******* 
2025-02-10 09:22:27.758498 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:22:27.758507 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:22:27.758516 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:22:27.758524 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:22:27.758533 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:22:27.758541 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:22:27.758551 | orchestrator | 2025-02-10 09:22:27.758560 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:22:27.758569 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:22:27.758578 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-02-10 09:22:27.758587 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-02-10 09:22:27.758596 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-02-10 09:22:27.758607 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-02-10 09:22:27.758616 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-02-10 09:22:27.758626 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-02-10 09:22:27.758635 | orchestrator | 2025-02-10 09:22:27.758644 | orchestrator | 2025-02-10 09:22:27.758653 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:22:27.758689 | orchestrator | Monday 10 February 2025 09:22:24 +0000 (0:00:00.603) 0:04:23.423 ******* 2025-02-10 09:22:27.758704 | orchestrator | =============================================================================== 2025-02-10 09:22:27.758720 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.74s 2025-02-10 09:22:27.758729 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 41.49s 2025-02-10 09:22:27.758738 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 14.71s 2025-02-10 09:22:27.758747 | orchestrator | osism.commons.kubectl : Install required packages ---------------------- 13.99s 2025-02-10 09:22:27.758757 | orchestrator | k3s_server_post : Install Cilium --------------------------------------- 11.92s 2025-02-10 09:22:27.758766 | orchestrator | Manage labels ---------------------------------------------------------- 10.56s 2025-02-10 09:22:27.758774 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.58s 2025-02-10 09:22:27.758783 | orchestrator | osism.commons.kubectl : Add repository Debian --------------------------- 7.97s 2025-02-10 09:22:27.758805 | orchestrator | osism.commons.k9s : Install k9s packages -------------------------------- 6.08s 2025-02-10 09:22:27.758814 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.63s 2025-02-10 09:22:27.758822 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.08s 2025-02-10 09:22:27.758832 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.21s 
2025-02-10 09:22:27.758842 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.20s 2025-02-10 09:22:27.758850 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 1.93s 2025-02-10 09:22:27.758859 | orchestrator | k3s_prereq : Set same timezone on every Server -------------------------- 1.83s 2025-02-10 09:22:27.758867 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.80s 2025-02-10 09:22:27.758876 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.79s 2025-02-10 09:22:27.758884 | orchestrator | osism.commons.kubectl : Install apt-transport-https package ------------- 1.71s 2025-02-10 09:22:27.758893 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.70s 2025-02-10 09:22:27.758901 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.69s 2025-02-10 09:22:27.758916 | orchestrator | 2025-02-10 09:22:27 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:22:30.788347 | orchestrator | 2025-02-10 09:22:27 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:22:30.788484 | orchestrator | 2025-02-10 09:22:27 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:30.788523 | orchestrator | 2025-02-10 09:22:30 | INFO  | Task d7e8dba8-07a4-412e-a55c-40f7d714903e is in state STARTED 2025-02-10 09:22:30.790830 | orchestrator | 2025-02-10 09:22:30 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:22:30.797419 | orchestrator | 2025-02-10 09:22:30 | INFO  | Task b996fa9c-021e-4b5e-ada1-4599a5118fa3 is in state STARTED 2025-02-10 09:22:30.799693 | orchestrator | 2025-02-10 09:22:30 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:22:30.800110 | orchestrator | 2025-02-10 09:22:30 | INFO  | Task 72a2ec26-cbcd-436a-aa4b-ab4fab3f7d08 is in state STARTED 2025-02-10 09:22:30.800756 | orchestrator | 2025-02-10 09:22:30 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:22:30.800863 | orchestrator | 2025-02-10 09:22:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:33.871403 | orchestrator | 2025-02-10 09:22:33 | INFO  | Task d7e8dba8-07a4-412e-a55c-40f7d714903e is in state SUCCESS 2025-02-10 09:22:33.875848 | orchestrator | 2025-02-10 09:22:33 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:22:33.880455 | orchestrator | 2025-02-10 09:22:33 | INFO  | Task b996fa9c-021e-4b5e-ada1-4599a5118fa3 is in state STARTED 2025-02-10 09:22:33.880507 | orchestrator | 2025-02-10 09:22:33 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:22:33.887229 | orchestrator | 2025-02-10 09:22:33 | INFO  | Task 72a2ec26-cbcd-436a-aa4b-ab4fab3f7d08 is in state STARTED 2025-02-10 09:22:33.887288 | orchestrator | 2025-02-10 09:22:33 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:22:36.940804 | orchestrator | 2025-02-10 09:22:33 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:36.940967 | orchestrator | 2025-02-10 09:22:36 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:22:36.947798 | orchestrator | 2025-02-10 09:22:36 | INFO  | Task b996fa9c-021e-4b5e-ada1-4599a5118fa3 is in state STARTED 2025-02-10 09:22:36.947936 | orchestrator | 
2025-02-10 09:22:36 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:22:36.951161 | orchestrator | 2025-02-10 09:22:36 | INFO  | Task 72a2ec26-cbcd-436a-aa4b-ab4fab3f7d08 is in state STARTED 2025-02-10 09:22:36.952305 | orchestrator | 2025-02-10 09:22:36 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:22:39.994319 | orchestrator | 2025-02-10 09:22:36 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:39.994570 | orchestrator | 2025-02-10 09:22:39 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:22:39.995068 | orchestrator | 2025-02-10 09:22:39 | INFO  | Task b996fa9c-021e-4b5e-ada1-4599a5118fa3 is in state SUCCESS 2025-02-10 09:22:39.995099 | orchestrator | 2025-02-10 09:22:39 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:22:39.995125 | orchestrator | 2025-02-10 09:22:39 | INFO  | Task 72a2ec26-cbcd-436a-aa4b-ab4fab3f7d08 is in state STARTED 2025-02-10 09:22:39.997883 | orchestrator | 2025-02-10 09:22:39 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:22:43.043189 | orchestrator | 2025-02-10 09:22:39 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:43.043551 | orchestrator | 2025-02-10 09:22:43 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:22:43.044584 | orchestrator | 2025-02-10 09:22:43 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:22:43.044638 | orchestrator | 2025-02-10 09:22:43 | INFO  | Task 72a2ec26-cbcd-436a-aa4b-ab4fab3f7d08 is in state SUCCESS 2025-02-10 09:22:43.044705 | orchestrator | 2025-02-10 09:22:43 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:22:46.101350 | orchestrator | 2025-02-10 09:22:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:46.101510 | orchestrator | 2025-02-10 09:22:46 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:22:46.103428 | orchestrator | 2025-02-10 09:22:46 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:22:49.152771 | orchestrator | 2025-02-10 09:22:46 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:22:49.152890 | orchestrator | 2025-02-10 09:22:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:49.152915 | orchestrator | 2025-02-10 09:22:49 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:22:49.154996 | orchestrator | 2025-02-10 09:22:49 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:22:49.156388 | orchestrator | 2025-02-10 09:22:49 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:22:49.157245 | orchestrator | 2025-02-10 09:22:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:52.210443 | orchestrator | 2025-02-10 09:22:52 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:22:52.210755 | orchestrator | 2025-02-10 09:22:52 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:22:52.210797 | orchestrator | 2025-02-10 09:22:52 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:22:55.242344 | orchestrator | 2025-02-10 09:22:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:55.242548 | orchestrator | 2025-02-10 09:22:55 | INFO 
 | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:22:58.289004 | orchestrator | 2025-02-10 09:22:55 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:22:58.289179 | orchestrator | 2025-02-10 09:22:55 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:22:58.289200 | orchestrator | 2025-02-10 09:22:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:22:58.289235 | orchestrator | 2025-02-10 09:22:58 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:22:58.290469 | orchestrator | 2025-02-10 09:22:58 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:22:58.290507 | orchestrator | 2025-02-10 09:22:58 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:23:01.330277 | orchestrator | 2025-02-10 09:22:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:01.330536 | orchestrator | 2025-02-10 09:23:01 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:01.331157 | orchestrator | 2025-02-10 09:23:01 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:01.331194 | orchestrator | 2025-02-10 09:23:01 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:23:04.380307 | orchestrator | 2025-02-10 09:23:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:04.380475 | orchestrator | 2025-02-10 09:23:04 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:07.427303 | orchestrator | 2025-02-10 09:23:04 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:07.427442 | orchestrator | 2025-02-10 09:23:04 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:23:07.427458 | orchestrator | 2025-02-10 09:23:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:07.427486 | orchestrator | 2025-02-10 09:23:07 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:10.470097 | orchestrator | 2025-02-10 09:23:07 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:10.470283 | orchestrator | 2025-02-10 09:23:07 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:23:10.470306 | orchestrator | 2025-02-10 09:23:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:10.470344 | orchestrator | 2025-02-10 09:23:10 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:10.470601 | orchestrator | 2025-02-10 09:23:10 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:10.471538 | orchestrator | 2025-02-10 09:23:10 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:23:13.513527 | orchestrator | 2025-02-10 09:23:10 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:13.513767 | orchestrator | 2025-02-10 09:23:13 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:13.523118 | orchestrator | 2025-02-10 09:23:13 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:13.528287 | orchestrator | 2025-02-10 09:23:13 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:23:16.578570 | orchestrator | 2025-02-10 09:23:13 | INFO  | Wait 1 second(s) until 
the next check 2025-02-10 09:23:16.578787 | orchestrator | 2025-02-10 09:23:16 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:16.580716 | orchestrator | 2025-02-10 09:23:16 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:19.627916 | orchestrator | 2025-02-10 09:23:16 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:23:19.628058 | orchestrator | 2025-02-10 09:23:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:19.628100 | orchestrator | 2025-02-10 09:23:19 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:19.628918 | orchestrator | 2025-02-10 09:23:19 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:22.674628 | orchestrator | 2025-02-10 09:23:19 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:23:22.674844 | orchestrator | 2025-02-10 09:23:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:22.674884 | orchestrator | 2025-02-10 09:23:22 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:22.675311 | orchestrator | 2025-02-10 09:23:22 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:25.730784 | orchestrator | 2025-02-10 09:23:22 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:23:25.730934 | orchestrator | 2025-02-10 09:23:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:25.730976 | orchestrator | 2025-02-10 09:23:25 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:25.731990 | orchestrator | 2025-02-10 09:23:25 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:25.735720 | orchestrator | 2025-02-10 09:23:25 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:23:28.787941 | orchestrator | 2025-02-10 09:23:25 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:28.788098 | orchestrator | 2025-02-10 09:23:28 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:28.788313 | orchestrator | 2025-02-10 09:23:28 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:28.788344 | orchestrator | 2025-02-10 09:23:28 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:23:31.833754 | orchestrator | 2025-02-10 09:23:28 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:31.833937 | orchestrator | 2025-02-10 09:23:31 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:31.838078 | orchestrator | 2025-02-10 09:23:31 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:31.838998 | orchestrator | 2025-02-10 09:23:31 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:23:34.890500 | orchestrator | 2025-02-10 09:23:31 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:34.890679 | orchestrator | 2025-02-10 09:23:34 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:34.891527 | orchestrator | 2025-02-10 09:23:34 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:34.891567 | orchestrator | 2025-02-10 09:23:34 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 
09:23:37.933946 | orchestrator | 2025-02-10 09:23:34 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:37.934193 | orchestrator | 2025-02-10 09:23:37 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:37.937191 | orchestrator | 2025-02-10 09:23:37 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:37.946421 | orchestrator | 2025-02-10 09:23:37 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:23:40.986596 | orchestrator | 2025-02-10 09:23:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:40.986816 | orchestrator | 2025-02-10 09:23:40 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:40.987846 | orchestrator | 2025-02-10 09:23:40 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:40.989857 | orchestrator | 2025-02-10 09:23:40 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state STARTED 2025-02-10 09:23:40.990482 | orchestrator | 2025-02-10 09:23:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:44.057808 | orchestrator | 2025-02-10 09:23:44 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:44.058110 | orchestrator | 2025-02-10 09:23:44 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:44.059614 | orchestrator | 2025-02-10 09:23:44 | INFO  | Task 5aefd25b-a85f-4540-9042-1fb56b7c320d is in state SUCCESS 2025-02-10 09:23:44.061800 | orchestrator | 2025-02-10 09:23:44 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:44.061887 | orchestrator | 2025-02-10 09:23:44.061905 | orchestrator | 2025-02-10 09:23:44.061920 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-02-10 09:23:44.061945 | orchestrator | 2025-02-10 09:23:44.061960 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-02-10 09:23:44.061974 | orchestrator | Monday 10 February 2025 09:22:29 +0000 (0:00:00.197) 0:00:00.197 ******* 2025-02-10 09:23:44.061990 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-02-10 09:23:44.062005 | orchestrator | 2025-02-10 09:23:44.062091 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-02-10 09:23:44.062110 | orchestrator | Monday 10 February 2025 09:22:30 +0000 (0:00:00.905) 0:00:01.103 ******* 2025-02-10 09:23:44.062125 | orchestrator | changed: [testbed-manager] 2025-02-10 09:23:44.062141 | orchestrator | 2025-02-10 09:23:44.062156 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-02-10 09:23:44.062172 | orchestrator | Monday 10 February 2025 09:22:32 +0000 (0:00:01.208) 0:00:02.312 ******* 2025-02-10 09:23:44.062212 | orchestrator | changed: [testbed-manager] 2025-02-10 09:23:44.062226 | orchestrator | 2025-02-10 09:23:44.062241 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:23:44.062255 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:23:44.062271 | orchestrator | 2025-02-10 09:23:44.062285 | orchestrator | 2025-02-10 09:23:44.062300 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:23:44.062314 | orchestrator | Monday 10 February 2025 
09:22:32 +0000 (0:00:00.647) 0:00:02.961 ******* 2025-02-10 09:23:44.062328 | orchestrator | =============================================================================== 2025-02-10 09:23:44.062342 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.21s 2025-02-10 09:23:44.062359 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.91s 2025-02-10 09:23:44.062375 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.65s 2025-02-10 09:23:44.062391 | orchestrator | 2025-02-10 09:23:44.062407 | orchestrator | 2025-02-10 09:23:44.062423 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-02-10 09:23:44.062438 | orchestrator | 2025-02-10 09:23:44.062454 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-02-10 09:23:44.062470 | orchestrator | Monday 10 February 2025 09:22:29 +0000 (0:00:00.330) 0:00:00.330 ******* 2025-02-10 09:23:44.062486 | orchestrator | ok: [testbed-manager] 2025-02-10 09:23:44.062502 | orchestrator | 2025-02-10 09:23:44.062518 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-02-10 09:23:44.062559 | orchestrator | Monday 10 February 2025 09:22:30 +0000 (0:00:00.754) 0:00:01.085 ******* 2025-02-10 09:23:44.062575 | orchestrator | ok: [testbed-manager] 2025-02-10 09:23:44.062591 | orchestrator | 2025-02-10 09:23:44.062608 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-02-10 09:23:44.062751 | orchestrator | Monday 10 February 2025 09:22:31 +0000 (0:00:00.655) 0:00:01.740 ******* 2025-02-10 09:23:44.062784 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-02-10 09:23:44.062808 | orchestrator | 2025-02-10 09:23:44.062827 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-02-10 09:23:44.062841 | orchestrator | Monday 10 February 2025 09:22:31 +0000 (0:00:00.773) 0:00:02.513 ******* 2025-02-10 09:23:44.062856 | orchestrator | changed: [testbed-manager] 2025-02-10 09:23:44.062870 | orchestrator | 2025-02-10 09:23:44.062884 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-02-10 09:23:44.062898 | orchestrator | Monday 10 February 2025 09:22:33 +0000 (0:00:01.481) 0:00:03.995 ******* 2025-02-10 09:23:44.062912 | orchestrator | changed: [testbed-manager] 2025-02-10 09:23:44.062926 | orchestrator | 2025-02-10 09:23:44.062939 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-02-10 09:23:44.062953 | orchestrator | Monday 10 February 2025 09:22:34 +0000 (0:00:00.827) 0:00:04.823 ******* 2025-02-10 09:23:44.062967 | orchestrator | changed: [testbed-manager -> localhost] 2025-02-10 09:23:44.062982 | orchestrator | 2025-02-10 09:23:44.062996 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-02-10 09:23:44.063010 | orchestrator | Monday 10 February 2025 09:22:35 +0000 (0:00:01.059) 0:00:05.882 ******* 2025-02-10 09:23:44.063024 | orchestrator | changed: [testbed-manager -> localhost] 2025-02-10 09:23:44.063038 | orchestrator | 2025-02-10 09:23:44.063052 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-02-10 09:23:44.063067 | orchestrator | Monday 10 February 2025 09:22:35 +0000 
(0:00:00.729) 0:00:06.612 ******* 2025-02-10 09:23:44.063080 | orchestrator | ok: [testbed-manager] 2025-02-10 09:23:44.063095 | orchestrator | 2025-02-10 09:23:44.063109 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-02-10 09:23:44.063123 | orchestrator | Monday 10 February 2025 09:22:36 +0000 (0:00:00.424) 0:00:07.037 ******* 2025-02-10 09:23:44.063137 | orchestrator | ok: [testbed-manager] 2025-02-10 09:23:44.063151 | orchestrator | 2025-02-10 09:23:44.063165 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:23:44.063179 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:23:44.063194 | orchestrator | 2025-02-10 09:23:44.063208 | orchestrator | 2025-02-10 09:23:44.063222 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:23:44.063236 | orchestrator | Monday 10 February 2025 09:22:36 +0000 (0:00:00.294) 0:00:07.332 ******* 2025-02-10 09:23:44.063250 | orchestrator | =============================================================================== 2025-02-10 09:23:44.063264 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.48s 2025-02-10 09:23:44.063278 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.06s 2025-02-10 09:23:44.063292 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.83s 2025-02-10 09:23:44.063318 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.77s 2025-02-10 09:23:44.063333 | orchestrator | Get home directory of operator user ------------------------------------- 0.75s 2025-02-10 09:23:44.063347 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.73s 2025-02-10 09:23:44.063361 | orchestrator | Create .kube directory -------------------------------------------------- 0.66s 2025-02-10 09:23:44.063375 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.42s 2025-02-10 09:23:44.063389 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.29s 2025-02-10 09:23:44.063413 | orchestrator | 2025-02-10 09:23:44.063428 | orchestrator | None 2025-02-10 09:23:44.063442 | orchestrator | 2025-02-10 09:23:44.063456 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:23:44.063470 | orchestrator | 2025-02-10 09:23:44.063484 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:23:44.063498 | orchestrator | Monday 10 February 2025 09:20:54 +0000 (0:00:00.491) 0:00:00.491 ******* 2025-02-10 09:23:44.063512 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:23:44.063526 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:23:44.063540 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:23:44.063554 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:44.063568 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:44.063581 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:44.063595 | orchestrator | 2025-02-10 09:23:44.063609 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:23:44.063623 | orchestrator | Monday 10 February 2025 09:20:56 +0000 (0:00:02.653) 0:00:03.145 ******* 
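The two kubeconfig plays above fetch the kubeconfig from the first control-plane node (testbed-node-0, 192.168.16.10), write it out on testbed-manager, and then rewrite the server address so the Kubernetes API stays reachable from the manager and from inside the manager service. The actual steps are Ansible tasks that are not shown in this log; the following is only a rough, hypothetical sketch of the address rewrite, where the file path and target URL are assumptions and only the node address comes from the run above.

import re
from pathlib import Path

def rewrite_kubeconfig_server(path: str, new_server: str) -> None:
    # Hypothetical helper: point every 'server:' entry of a kubeconfig at
    # new_server. The playbook performs this step with its own task.
    config = Path(path)
    text = config.read_text()
    # e.g. turn 'server: https://127.0.0.1:6443' into a routable address
    text = re.sub(r"server:\s*\S+", f"server: {new_server}", text)
    config.write_text(text)

# Illustrative call only: the path is an assumption, 192.168.16.10 is
# testbed-node-0's address from this run, 6443 is the default API port.
rewrite_kubeconfig_server("kubeconfig", "https://192.168.16.10:6443")

With the server entry pointing at a routable address, the later tasks in the play (setting the KUBECONFIG environment variable and enabling kubectl completion) give the operator user a working kubectl on the manager.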
2025-02-10 09:23:44.063637 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-02-10 09:23:44.063651 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-02-10 09:23:44.063665 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-02-10 09:23:44.063679 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-02-10 09:23:44.063694 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-02-10 09:23:44.063861 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-02-10 09:23:44.063882 | orchestrator | 2025-02-10 09:23:44.063897 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-02-10 09:23:44.063911 | orchestrator | 2025-02-10 09:23:44.063932 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-02-10 09:23:44.063945 | orchestrator | Monday 10 February 2025 09:20:58 +0000 (0:00:02.067) 0:00:05.213 ******* 2025-02-10 09:23:44.063958 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:23:44.063972 | orchestrator | 2025-02-10 09:23:44.063984 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-02-10 09:23:44.063997 | orchestrator | Monday 10 February 2025 09:21:01 +0000 (0:00:02.595) 0:00:07.808 ******* 2025-02-10 09:23:44.064010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064031 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064044 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': 
True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064120 | orchestrator | 2025-02-10 09:23:44.064132 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-02-10 09:23:44.064145 | orchestrator | Monday 10 February 2025 09:21:03 +0000 (0:00:02.135) 0:00:09.944 ******* 2025-02-10 09:23:44.064157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064170 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064182 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064211 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064243 | orchestrator | 2025-02-10 09:23:44.064256 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-02-10 09:23:44.064268 | orchestrator | Monday 10 February 2025 09:21:06 +0000 (0:00:03.073) 0:00:13.017 ******* 2025-02-10 09:23:44.064281 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064302 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064315 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064365 | orchestrator | 2025-02-10 09:23:44.064377 | orchestrator | TASK [ovn-controller : Copying over systemd override] 
************************** 2025-02-10 09:23:44.064390 | orchestrator | Monday 10 February 2025 09:21:08 +0000 (0:00:02.107) 0:00:15.124 ******* 2025-02-10 09:23:44.064402 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064420 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064439 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064495 | orchestrator | 2025-02-10 09:23:44.064508 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-02-10 09:23:44.064520 | orchestrator | Monday 10 February 2025 09:21:11 +0000 (0:00:03.209) 0:00:18.334 ******* 2025-02-10 09:23:44.064532 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064557 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.064620 | orchestrator | 2025-02-10 09:23:44.064633 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-02-10 09:23:44.064645 | orchestrator | Monday 10 February 2025 09:21:15 +0000 (0:00:03.871) 0:00:22.205 ******* 2025-02-10 09:23:44.064658 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:23:44.064670 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:23:44.064683 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:44.064695 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:23:44.064740 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:44.064753 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:44.064765 | orchestrator | 2025-02-10 09:23:44.064778 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-02-10 09:23:44.064790 | orchestrator | Monday 10 February 2025 09:21:19 +0000 (0:00:04.003) 0:00:26.209 ******* 2025-02-10 
09:23:44.064803 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-02-10 09:23:44.064822 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-02-10 09:23:44.064835 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-02-10 09:23:44.064847 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-02-10 09:23:44.064859 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-02-10 09:23:44.064872 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-02-10 09:23:44.064884 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-02-10 09:23:44.064896 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-02-10 09:23:44.064921 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-02-10 09:23:44.064934 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-02-10 09:23:44.064946 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-02-10 09:23:44.064959 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-02-10 09:23:44.064971 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-02-10 09:23:44.064984 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-02-10 09:23:44.064997 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-02-10 09:23:44.065009 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-02-10 09:23:44.065029 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-02-10 09:23:44.065041 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-02-10 09:23:44.065054 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-02-10 09:23:44.065067 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-02-10 09:23:44.065080 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-02-10 09:23:44.065092 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-02-10 09:23:44.065104 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-02-10 09:23:44.065116 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-02-10 09:23:44.065129 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'ovn-openflow-probe-interval', 'value': '60'}) 2025-02-10 09:23:44.065141 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-02-10 09:23:44.065153 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-02-10 09:23:44.065165 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-02-10 09:23:44.065178 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-02-10 09:23:44.065190 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-02-10 09:23:44.065202 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-02-10 09:23:44.065214 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-02-10 09:23:44.065227 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-02-10 09:23:44.065239 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-02-10 09:23:44.065252 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-02-10 09:23:44.065264 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-02-10 09:23:44.065277 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-02-10 09:23:44.065289 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-02-10 09:23:44.065301 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-02-10 09:23:44.065319 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-02-10 09:23:44.065332 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-02-10 09:23:44.065344 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-02-10 09:23:44.065414 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-02-10 09:23:44.065429 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-02-10 09:23:44.065441 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-02-10 09:23:44.065461 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-02-10 09:23:44.065474 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-02-10 09:23:44.065486 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-02-10 09:23:44.065498 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 
'state': 'absent'}) 2025-02-10 09:23:44.065511 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-02-10 09:23:44.065524 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-02-10 09:23:44.065536 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-02-10 09:23:44.065548 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-02-10 09:23:44.065561 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-02-10 09:23:44.065573 | orchestrator | 2025-02-10 09:23:44.065586 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-02-10 09:23:44.065598 | orchestrator | Monday 10 February 2025 09:21:39 +0000 (0:00:19.494) 0:00:45.704 ******* 2025-02-10 09:23:44.065611 | orchestrator | 2025-02-10 09:23:44.065624 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-02-10 09:23:44.065636 | orchestrator | Monday 10 February 2025 09:21:39 +0000 (0:00:00.062) 0:00:45.766 ******* 2025-02-10 09:23:44.065648 | orchestrator | 2025-02-10 09:23:44.065666 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-02-10 09:23:44.065679 | orchestrator | Monday 10 February 2025 09:21:39 +0000 (0:00:00.196) 0:00:45.962 ******* 2025-02-10 09:23:44.065691 | orchestrator | 2025-02-10 09:23:44.065725 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-02-10 09:23:44.065739 | orchestrator | Monday 10 February 2025 09:21:39 +0000 (0:00:00.087) 0:00:46.050 ******* 2025-02-10 09:23:44.065751 | orchestrator | 2025-02-10 09:23:44.065763 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-02-10 09:23:44.065775 | orchestrator | Monday 10 February 2025 09:21:39 +0000 (0:00:00.092) 0:00:46.142 ******* 2025-02-10 09:23:44.065788 | orchestrator | 2025-02-10 09:23:44.065800 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-02-10 09:23:44.065812 | orchestrator | Monday 10 February 2025 09:21:39 +0000 (0:00:00.117) 0:00:46.260 ******* 2025-02-10 09:23:44.065829 | orchestrator | 2025-02-10 09:23:44.065850 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-02-10 09:23:44.065870 | orchestrator | Monday 10 February 2025 09:21:40 +0000 (0:00:00.351) 0:00:46.611 ******* 2025-02-10 09:23:44.065891 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:23:44.065911 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:44.065924 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:23:44.065937 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:23:44.065949 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:44.065962 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:44.065980 | orchestrator | 2025-02-10 09:23:44.065993 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-02-10 09:23:44.066005 | orchestrator | Monday 10 February 2025 09:21:42 +0000 (0:00:02.743) 0:00:49.354 ******* 2025-02-10 09:23:44.066053 | 
orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:44.066068 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:23:44.066080 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:23:44.066093 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:44.066113 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:44.066125 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:23:44.066137 | orchestrator | 2025-02-10 09:23:44.066150 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-02-10 09:23:44.066162 | orchestrator | 2025-02-10 09:23:44.066175 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-02-10 09:23:44.066187 | orchestrator | Monday 10 February 2025 09:21:58 +0000 (0:00:15.249) 0:01:04.603 ******* 2025-02-10 09:23:44.066199 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:23:44.066211 | orchestrator | 2025-02-10 09:23:44.066234 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-02-10 09:23:44.066248 | orchestrator | Monday 10 February 2025 09:21:59 +0000 (0:00:00.931) 0:01:05.535 ******* 2025-02-10 09:23:44.066262 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:23:44.066276 | orchestrator | 2025-02-10 09:23:44.066290 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-02-10 09:23:44.066304 | orchestrator | Monday 10 February 2025 09:22:00 +0000 (0:00:01.219) 0:01:06.754 ******* 2025-02-10 09:23:44.066318 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:44.066332 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:44.066346 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:44.066360 | orchestrator | 2025-02-10 09:23:44.066373 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-02-10 09:23:44.066387 | orchestrator | Monday 10 February 2025 09:22:01 +0000 (0:00:01.423) 0:01:08.178 ******* 2025-02-10 09:23:44.066401 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:44.066415 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:44.066429 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:44.066443 | orchestrator | 2025-02-10 09:23:44.066457 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-02-10 09:23:44.066471 | orchestrator | Monday 10 February 2025 09:22:02 +0000 (0:00:00.451) 0:01:08.630 ******* 2025-02-10 09:23:44.066485 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:44.066498 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:44.066512 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:44.066526 | orchestrator | 2025-02-10 09:23:44.066540 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-02-10 09:23:44.066553 | orchestrator | Monday 10 February 2025 09:22:03 +0000 (0:00:01.120) 0:01:09.751 ******* 2025-02-10 09:23:44.066567 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:44.066581 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:44.066594 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:44.066608 | orchestrator | 2025-02-10 09:23:44.066622 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-02-10 09:23:44.066636 | 
orchestrator | Monday 10 February 2025 09:22:04 +0000 (0:00:01.156) 0:01:10.908 ******* 2025-02-10 09:23:44.066649 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:44.066663 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:44.066677 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:44.066690 | orchestrator | 2025-02-10 09:23:44.066779 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-02-10 09:23:44.066797 | orchestrator | Monday 10 February 2025 09:22:05 +0000 (0:00:01.425) 0:01:12.333 ******* 2025-02-10 09:23:44.066811 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.066826 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.066840 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.066854 | orchestrator | 2025-02-10 09:23:44.066868 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-02-10 09:23:44.066882 | orchestrator | Monday 10 February 2025 09:22:06 +0000 (0:00:00.970) 0:01:13.304 ******* 2025-02-10 09:23:44.066896 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.066910 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.066933 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.066947 | orchestrator | 2025-02-10 09:23:44.066962 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-02-10 09:23:44.066981 | orchestrator | Monday 10 February 2025 09:22:07 +0000 (0:00:00.625) 0:01:13.929 ******* 2025-02-10 09:23:44.066996 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.067010 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.067024 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.067037 | orchestrator | 2025-02-10 09:23:44.067052 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-02-10 09:23:44.067065 | orchestrator | Monday 10 February 2025 09:22:08 +0000 (0:00:00.943) 0:01:14.872 ******* 2025-02-10 09:23:44.067079 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.067093 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.067106 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.067120 | orchestrator | 2025-02-10 09:23:44.067134 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-02-10 09:23:44.067148 | orchestrator | Monday 10 February 2025 09:22:09 +0000 (0:00:00.931) 0:01:15.804 ******* 2025-02-10 09:23:44.067162 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.067176 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.067189 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.067203 | orchestrator | 2025-02-10 09:23:44.067217 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-02-10 09:23:44.067231 | orchestrator | Monday 10 February 2025 09:22:09 +0000 (0:00:00.577) 0:01:16.381 ******* 2025-02-10 09:23:44.067245 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.067259 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.067272 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.067286 | orchestrator | 2025-02-10 09:23:44.067300 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-02-10 09:23:44.067313 | orchestrator | Monday 10 February 2025 09:22:10 +0000 (0:00:00.725) 0:01:17.107 
******* 2025-02-10 09:23:44.067327 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.067341 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.067355 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.067369 | orchestrator | 2025-02-10 09:23:44.067383 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-02-10 09:23:44.067397 | orchestrator | Monday 10 February 2025 09:22:11 +0000 (0:00:00.670) 0:01:17.777 ******* 2025-02-10 09:23:44.067411 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.067424 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.067438 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.067452 | orchestrator | 2025-02-10 09:23:44.067466 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-02-10 09:23:44.067480 | orchestrator | Monday 10 February 2025 09:22:11 +0000 (0:00:00.386) 0:01:18.164 ******* 2025-02-10 09:23:44.067494 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.067507 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.067521 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.067535 | orchestrator | 2025-02-10 09:23:44.067549 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-02-10 09:23:44.067570 | orchestrator | Monday 10 February 2025 09:22:12 +0000 (0:00:00.791) 0:01:18.956 ******* 2025-02-10 09:23:44.067584 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.067603 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.067618 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.067632 | orchestrator | 2025-02-10 09:23:44.067646 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-02-10 09:23:44.067660 | orchestrator | Monday 10 February 2025 09:22:14 +0000 (0:00:01.539) 0:01:20.495 ******* 2025-02-10 09:23:44.067674 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.067697 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.067792 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.067819 | orchestrator | 2025-02-10 09:23:44.067833 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-02-10 09:23:44.067847 | orchestrator | Monday 10 February 2025 09:22:15 +0000 (0:00:01.670) 0:01:22.166 ******* 2025-02-10 09:23:44.067861 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.067875 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.067889 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.067901 | orchestrator | 2025-02-10 09:23:44.067913 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-02-10 09:23:44.067926 | orchestrator | Monday 10 February 2025 09:22:16 +0000 (0:00:01.066) 0:01:23.233 ******* 2025-02-10 09:23:44.067938 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:23:44.067951 | orchestrator | 2025-02-10 09:23:44.067963 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-02-10 09:23:44.067975 | orchestrator | Monday 10 February 2025 09:22:20 +0000 (0:00:03.237) 0:01:26.470 ******* 2025-02-10 09:23:44.067988 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:44.068000 | orchestrator | 
ok: [testbed-node-1] 2025-02-10 09:23:44.068012 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:44.068025 | orchestrator | 2025-02-10 09:23:44.068037 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-02-10 09:23:44.068049 | orchestrator | Monday 10 February 2025 09:22:21 +0000 (0:00:01.751) 0:01:28.221 ******* 2025-02-10 09:23:44.068061 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:44.068074 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:44.068086 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:44.068098 | orchestrator | 2025-02-10 09:23:44.068110 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-02-10 09:23:44.068123 | orchestrator | Monday 10 February 2025 09:22:22 +0000 (0:00:00.796) 0:01:29.018 ******* 2025-02-10 09:23:44.068135 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.068147 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.068159 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.068172 | orchestrator | 2025-02-10 09:23:44.068184 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-02-10 09:23:44.068196 | orchestrator | Monday 10 February 2025 09:22:23 +0000 (0:00:00.795) 0:01:29.813 ******* 2025-02-10 09:23:44.068208 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.068221 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.068233 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.068246 | orchestrator | 2025-02-10 09:23:44.068258 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-02-10 09:23:44.068271 | orchestrator | Monday 10 February 2025 09:22:24 +0000 (0:00:00.782) 0:01:30.595 ******* 2025-02-10 09:23:44.068283 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.068295 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.068307 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.068320 | orchestrator | 2025-02-10 09:23:44.068337 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-02-10 09:23:44.068350 | orchestrator | Monday 10 February 2025 09:22:24 +0000 (0:00:00.408) 0:01:31.004 ******* 2025-02-10 09:23:44.068362 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.068374 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.068387 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.068399 | orchestrator | 2025-02-10 09:23:44.068412 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-02-10 09:23:44.068424 | orchestrator | Monday 10 February 2025 09:22:25 +0000 (0:00:00.523) 0:01:31.527 ******* 2025-02-10 09:23:44.068436 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.068448 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.068461 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.068473 | orchestrator | 2025-02-10 09:23:44.068485 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-02-10 09:23:44.068504 | orchestrator | Monday 10 February 2025 09:22:26 +0000 (0:00:00.891) 0:01:32.418 ******* 2025-02-10 09:23:44.068517 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.068529 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.068541 | orchestrator | 
skipping: [testbed-node-2] 2025-02-10 09:23:44.068554 | orchestrator | 2025-02-10 09:23:44.068566 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-02-10 09:23:44.068579 | orchestrator | Monday 10 February 2025 09:22:26 +0000 (0:00:00.906) 0:01:33.325 ******* 2025-02-10 09:23:44.068592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.068627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.068642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.068656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.068668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.068681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.068718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.068743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.068764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.068789 | orchestrator | 2025-02-10 09:23:44.068802 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-02-10 09:23:44.068814 | orchestrator | Monday 10 February 2025 09:22:29 +0000 (0:00:02.129) 0:01:35.454 ******* 2025-02-10 09:23:44.068827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.068840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.068860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.068873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.068885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.068898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-02-10 09:23:44.068911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.068923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.068946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.068959 | orchestrator | 2025-02-10 09:23:44.068971 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-02-10 09:23:44.068984 | orchestrator | Monday 10 February 2025 09:22:35 +0000 (0:00:06.172) 0:01:41.627 ******* 2025-02-10 09:23:44.068996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.069013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.069025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.069049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.069062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 
'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.069075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.069087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.069100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.069119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.069132 | orchestrator | 2025-02-10 09:23:44.069144 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-02-10 09:23:44.069157 | orchestrator | Monday 10 February 2025 09:22:38 +0000 (0:00:03.094) 0:01:44.721 ******* 2025-02-10 09:23:44.069169 | orchestrator | 2025-02-10 09:23:44.069182 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-02-10 09:23:44.069194 | orchestrator | Monday 10 February 2025 09:22:38 +0000 (0:00:00.120) 0:01:44.841 ******* 2025-02-10 09:23:44.069206 | orchestrator | 2025-02-10 09:23:44.069219 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-02-10 09:23:44.069231 | orchestrator | Monday 10 February 2025 09:22:38 +0000 (0:00:00.089) 0:01:44.930 ******* 2025-02-10 09:23:44.069243 | orchestrator | 2025-02-10 09:23:44.069256 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-02-10 09:23:44.069268 | orchestrator | Monday 10 February 2025 09:22:38 +0000 (0:00:00.148) 0:01:45.079 ******* 2025-02-10 09:23:44.069280 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:44.069293 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:44.069305 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:44.069317 | orchestrator | 2025-02-10 09:23:44.069330 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-02-10 09:23:44.069342 | 
orchestrator | Monday 10 February 2025 09:22:46 +0000 (0:00:08.099) 0:01:53.178 ******* 2025-02-10 09:23:44.069354 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:44.069367 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:44.069379 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:44.069391 | orchestrator | 2025-02-10 09:23:44.069404 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-02-10 09:23:44.069416 | orchestrator | Monday 10 February 2025 09:22:49 +0000 (0:00:03.032) 0:01:56.211 ******* 2025-02-10 09:23:44.069428 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:44.069441 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:44.069453 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:44.069465 | orchestrator | 2025-02-10 09:23:44.069478 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-02-10 09:23:44.069490 | orchestrator | Monday 10 February 2025 09:22:56 +0000 (0:00:06.947) 0:02:03.159 ******* 2025-02-10 09:23:44.069502 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.069515 | orchestrator | 2025-02-10 09:23:44.069527 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-02-10 09:23:44.069539 | orchestrator | Monday 10 February 2025 09:22:56 +0000 (0:00:00.112) 0:02:03.271 ******* 2025-02-10 09:23:44.069552 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:44.069564 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:44.069576 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:44.069588 | orchestrator | 2025-02-10 09:23:44.069606 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-02-10 09:23:44.069618 | orchestrator | Monday 10 February 2025 09:22:57 +0000 (0:00:00.794) 0:02:04.066 ******* 2025-02-10 09:23:44.069631 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.069643 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.069655 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:44.069667 | orchestrator | 2025-02-10 09:23:44.069680 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-02-10 09:23:44.069692 | orchestrator | Monday 10 February 2025 09:22:58 +0000 (0:00:00.756) 0:02:04.822 ******* 2025-02-10 09:23:44.069727 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:44.069741 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:44.069760 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:44.069772 | orchestrator | 2025-02-10 09:23:44.069789 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-02-10 09:23:44.069801 | orchestrator | Monday 10 February 2025 09:22:59 +0000 (0:00:00.824) 0:02:05.647 ******* 2025-02-10 09:23:44.069814 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.069826 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.069844 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:44.069856 | orchestrator | 2025-02-10 09:23:44.069869 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-02-10 09:23:44.069881 | orchestrator | Monday 10 February 2025 09:22:59 +0000 (0:00:00.732) 0:02:06.379 ******* 2025-02-10 09:23:44.069893 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:44.069906 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:44.069918 | 
orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:44.069930 | orchestrator | 2025-02-10 09:23:44.069943 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-02-10 09:23:44.069955 | orchestrator | Monday 10 February 2025 09:23:01 +0000 (0:00:01.537) 0:02:07.916 ******* 2025-02-10 09:23:44.069968 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:44.069980 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:44.069992 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:44.070004 | orchestrator | 2025-02-10 09:23:44.070066 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-02-10 09:23:44.070083 | orchestrator | Monday 10 February 2025 09:23:02 +0000 (0:00:01.063) 0:02:08.980 ******* 2025-02-10 09:23:44.070095 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:44.070107 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:44.070120 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:44.070132 | orchestrator | 2025-02-10 09:23:44.070144 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-02-10 09:23:44.070157 | orchestrator | Monday 10 February 2025 09:23:03 +0000 (0:00:00.588) 0:02:09.568 ******* 2025-02-10 09:23:44.070170 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070190 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070203 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070216 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070234 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070255 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 
'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070275 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070288 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070301 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070313 | orchestrator | 2025-02-10 09:23:44.070326 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-02-10 09:23:44.070339 | orchestrator | Monday 10 February 2025 09:23:04 +0000 (0:00:01.810) 0:02:11.379 ******* 2025-02-10 09:23:44.070351 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070365 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070377 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 
09:23:44.070403 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070453 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070484 | orchestrator | 2025-02-10 09:23:44.070497 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-02-10 09:23:44.070510 | orchestrator | Monday 10 February 2025 09:23:10 +0000 (0:00:05.567) 0:02:16.947 ******* 2025-02-10 09:23:44.070524 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070537 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070549 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 
'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070562 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070574 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070598 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070611 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070629 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070642 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:23:44.070654 | orchestrator | 2025-02-10 09:23:44.070667 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-02-10 09:23:44.070679 | orchestrator | Monday 10 February 2025 09:23:14 +0000 (0:00:03.875) 0:02:20.822 ******* 2025-02-10 09:23:44.070692 | orchestrator | 2025-02-10 09:23:44.070758 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-02-10 09:23:44.070773 | orchestrator | Monday 10 February 2025 09:23:14 +0000 (0:00:00.349) 0:02:21.171 ******* 2025-02-10 09:23:44.070785 | orchestrator | 2025-02-10 09:23:44.070798 | orchestrator | TASK [ovn-db : Flush handlers] 
************************************************* 2025-02-10 09:23:44.070810 | orchestrator | Monday 10 February 2025 09:23:14 +0000 (0:00:00.062) 0:02:21.234 ******* 2025-02-10 09:23:44.070823 | orchestrator | 2025-02-10 09:23:44.070835 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-02-10 09:23:44.070848 | orchestrator | Monday 10 February 2025 09:23:14 +0000 (0:00:00.064) 0:02:21.299 ******* 2025-02-10 09:23:44.070860 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:44.070873 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:44.070885 | orchestrator | 2025-02-10 09:23:44.070897 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-02-10 09:23:44.070909 | orchestrator | Monday 10 February 2025 09:23:22 +0000 (0:00:07.406) 0:02:28.705 ******* 2025-02-10 09:23:44.070922 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:44.070934 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:44.070947 | orchestrator | 2025-02-10 09:23:44.070959 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-02-10 09:23:44.070971 | orchestrator | Monday 10 February 2025 09:23:28 +0000 (0:00:06.522) 0:02:35.228 ******* 2025-02-10 09:23:44.070984 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:23:44.070996 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:23:44.071008 | orchestrator | 2025-02-10 09:23:44.071021 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-02-10 09:23:44.071121 | orchestrator | Monday 10 February 2025 09:23:35 +0000 (0:00:07.033) 0:02:42.261 ******* 2025-02-10 09:23:44.071137 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:23:44.071150 | orchestrator | 2025-02-10 09:23:44.071163 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-02-10 09:23:44.071185 | orchestrator | Monday 10 February 2025 09:23:36 +0000 (0:00:00.148) 0:02:42.410 ******* 2025-02-10 09:23:44.071198 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:44.071210 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:44.071223 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:44.071235 | orchestrator | 2025-02-10 09:23:44.071248 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-02-10 09:23:44.071260 | orchestrator | Monday 10 February 2025 09:23:36 +0000 (0:00:00.982) 0:02:43.392 ******* 2025-02-10 09:23:44.071272 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.071283 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.071293 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:44.071304 | orchestrator | 2025-02-10 09:23:44.071318 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-02-10 09:23:44.071329 | orchestrator | Monday 10 February 2025 09:23:37 +0000 (0:00:00.718) 0:02:44.111 ******* 2025-02-10 09:23:44.071339 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:44.071349 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:44.071359 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:44.071370 | orchestrator | 2025-02-10 09:23:44.071380 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-02-10 09:23:44.071390 | orchestrator | Monday 10 February 2025 09:23:38 +0000 (0:00:01.002) 0:02:45.114 ******* 
2025-02-10 09:23:44.071400 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:23:44.071410 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:23:44.071420 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:23:44.071431 | orchestrator | 2025-02-10 09:23:44.071441 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-02-10 09:23:44.071451 | orchestrator | Monday 10 February 2025 09:23:39 +0000 (0:00:00.680) 0:02:45.794 ******* 2025-02-10 09:23:44.071461 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:44.071471 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:44.071481 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:44.071491 | orchestrator | 2025-02-10 09:23:44.071501 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-02-10 09:23:44.071511 | orchestrator | Monday 10 February 2025 09:23:40 +0000 (0:00:00.762) 0:02:46.557 ******* 2025-02-10 09:23:44.071603 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:23:44.071615 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:23:44.071625 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:23:44.071636 | orchestrator | 2025-02-10 09:23:44.071646 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:23:44.071657 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-02-10 09:23:44.071669 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-02-10 09:23:44.071679 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-02-10 09:23:44.071696 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:23:47.096929 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:23:47.097058 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:23:47.097102 | orchestrator | 2025-02-10 09:23:47.097117 | orchestrator | 2025-02-10 09:23:47.097132 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:23:47.097148 | orchestrator | Monday 10 February 2025 09:23:41 +0000 (0:00:01.824) 0:02:48.381 ******* 2025-02-10 09:23:47.097198 | orchestrator | =============================================================================== 2025-02-10 09:23:47.097214 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.49s 2025-02-10 09:23:47.097228 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 15.51s 2025-02-10 09:23:47.097243 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 15.25s 2025-02-10 09:23:47.097257 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.98s 2025-02-10 09:23:47.097271 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.55s 2025-02-10 09:23:47.097285 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.17s 2025-02-10 09:23:47.097299 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.57s 2025-02-10 09:23:47.097313 | orchestrator | ovn-controller : Create br-int bridge on 
OpenvSwitch -------------------- 4.00s 2025-02-10 09:23:47.097327 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.88s 2025-02-10 09:23:47.097341 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 3.87s 2025-02-10 09:23:47.097355 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 3.24s 2025-02-10 09:23:47.097369 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.21s 2025-02-10 09:23:47.097383 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.09s 2025-02-10 09:23:47.097396 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 3.07s 2025-02-10 09:23:47.097410 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.74s 2025-02-10 09:23:47.097424 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.65s 2025-02-10 09:23:47.097441 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.60s 2025-02-10 09:23:47.097457 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.14s 2025-02-10 09:23:47.097473 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.13s 2025-02-10 09:23:47.097489 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 2.11s 2025-02-10 09:23:47.097529 | orchestrator | 2025-02-10 09:23:47 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:47.098671 | orchestrator | 2025-02-10 09:23:47 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:50.154590 | orchestrator | 2025-02-10 09:23:47 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:50.154790 | orchestrator | 2025-02-10 09:23:50 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:50.161144 | orchestrator | 2025-02-10 09:23:50 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:53.225251 | orchestrator | 2025-02-10 09:23:50 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:53.225404 | orchestrator | 2025-02-10 09:23:53 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:56.274371 | orchestrator | 2025-02-10 09:23:53 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:56.274527 | orchestrator | 2025-02-10 09:23:53 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:56.274570 | orchestrator | 2025-02-10 09:23:56 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:56.275652 | orchestrator | 2025-02-10 09:23:56 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:23:59.330479 | orchestrator | 2025-02-10 09:23:56 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:23:59.330639 | orchestrator | 2025-02-10 09:23:59 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:23:59.333247 | orchestrator | 2025-02-10 09:23:59 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:02.374559 | orchestrator | 2025-02-10 09:23:59 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:02.374707 | orchestrator | 2025-02-10 09:24:02 | INFO  | Task 
ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:05.415930 | orchestrator | 2025-02-10 09:24:02 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:05.417060 | orchestrator | 2025-02-10 09:24:02 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:05.417137 | orchestrator | 2025-02-10 09:24:05 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:08.455742 | orchestrator | 2025-02-10 09:24:05 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:08.455863 | orchestrator | 2025-02-10 09:24:05 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:08.455893 | orchestrator | 2025-02-10 09:24:08 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:08.457334 | orchestrator | 2025-02-10 09:24:08 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:11.514917 | orchestrator | 2025-02-10 09:24:08 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:11.515085 | orchestrator | 2025-02-10 09:24:11 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:11.516367 | orchestrator | 2025-02-10 09:24:11 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:14.573188 | orchestrator | 2025-02-10 09:24:11 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:14.573361 | orchestrator | 2025-02-10 09:24:14 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:17.624065 | orchestrator | 2025-02-10 09:24:14 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:17.624210 | orchestrator | 2025-02-10 09:24:14 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:17.624250 | orchestrator | 2025-02-10 09:24:17 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:17.624460 | orchestrator | 2025-02-10 09:24:17 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:20.672657 | orchestrator | 2025-02-10 09:24:17 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:20.672857 | orchestrator | 2025-02-10 09:24:20 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:23.735984 | orchestrator | 2025-02-10 09:24:20 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:23.736125 | orchestrator | 2025-02-10 09:24:20 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:23.736165 | orchestrator | 2025-02-10 09:24:23 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:23.736706 | orchestrator | 2025-02-10 09:24:23 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:26.787813 | orchestrator | 2025-02-10 09:24:23 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:26.787967 | orchestrator | 2025-02-10 09:24:26 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:26.789211 | orchestrator | 2025-02-10 09:24:26 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:26.789704 | orchestrator | 2025-02-10 09:24:26 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:29.841152 | orchestrator | 2025-02-10 09:24:29 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:29.842124 | orchestrator 
| 2025-02-10 09:24:29 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:32.900198 | orchestrator | 2025-02-10 09:24:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:32.900369 | orchestrator | 2025-02-10 09:24:32 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:32.903447 | orchestrator | 2025-02-10 09:24:32 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:35.959347 | orchestrator | 2025-02-10 09:24:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:35.959513 | orchestrator | 2025-02-10 09:24:35 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:35.960821 | orchestrator | 2025-02-10 09:24:35 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:39.014673 | orchestrator | 2025-02-10 09:24:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:39.014836 | orchestrator | 2025-02-10 09:24:39 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:42.067757 | orchestrator | 2025-02-10 09:24:39 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:42.067870 | orchestrator | 2025-02-10 09:24:39 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:42.067894 | orchestrator | 2025-02-10 09:24:42 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:42.068263 | orchestrator | 2025-02-10 09:24:42 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:42.068404 | orchestrator | 2025-02-10 09:24:42 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:45.111104 | orchestrator | 2025-02-10 09:24:45 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:48.165388 | orchestrator | 2025-02-10 09:24:45 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:48.165627 | orchestrator | 2025-02-10 09:24:45 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:48.165670 | orchestrator | 2025-02-10 09:24:48 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:48.165967 | orchestrator | 2025-02-10 09:24:48 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:48.166003 | orchestrator | 2025-02-10 09:24:48 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:51.237991 | orchestrator | 2025-02-10 09:24:51 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:51.239646 | orchestrator | 2025-02-10 09:24:51 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:51.240035 | orchestrator | 2025-02-10 09:24:51 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:54.286836 | orchestrator | 2025-02-10 09:24:54 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:54.287175 | orchestrator | 2025-02-10 09:24:54 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:24:57.334455 | orchestrator | 2025-02-10 09:24:54 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:24:57.334608 | orchestrator | 2025-02-10 09:24:57 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:24:57.339696 | orchestrator | 2025-02-10 09:24:57 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 
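The surrounding entries show the orchestrator repeatedly checking two task IDs and sleeping one second between checks until they leave the STARTED state (SUCCESS is reported further below). A minimal sketch of that polling pattern follows; only the two task UUIDs are taken from this log, while the get_task_state() backend is a simulated stand-in and not the OSISM manager's actual implementation.

#!/usr/bin/env python3
import itertools
import time

# Task UUIDs copied from the log output above.
TASK_IDS = (
    "ccf8f4bd-9349-494d-b69b-7e63ea35e96c",
    "7647509d-e8ff-4cff-9d70-878fd001ac2a",
)

# Simulated backend (assumption): reports STARTED for a few polls, then SUCCESS.
_poll_counters = {task_id: itertools.count() for task_id in TASK_IDS}

def get_task_state(task_id: str) -> str:
    return "STARTED" if next(_poll_counters[task_id]) < 3 else "SUCCESS"

def wait_for_tasks(task_ids, interval: float = 1.0) -> None:
    """Poll each task until it reaches SUCCESS, logging like the output above."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

if __name__ == "__main__":
    wait_for_tasks(TASK_IDS)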
2025-02-10 09:25:00.380106 | orchestrator | 2025-02-10 09:24:57 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:00.380239 | orchestrator | 2025-02-10 09:25:00 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:00.380923 | orchestrator | 2025-02-10 09:25:00 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:03.423681 | orchestrator | 2025-02-10 09:25:00 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:03.423833 | orchestrator | 2025-02-10 09:25:03 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:06.473016 | orchestrator | 2025-02-10 09:25:03 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:06.473087 | orchestrator | 2025-02-10 09:25:03 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:06.473140 | orchestrator | 2025-02-10 09:25:06 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:06.474446 | orchestrator | 2025-02-10 09:25:06 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:09.532576 | orchestrator | 2025-02-10 09:25:06 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:09.532710 | orchestrator | 2025-02-10 09:25:09 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:09.533048 | orchestrator | 2025-02-10 09:25:09 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:12.575901 | orchestrator | 2025-02-10 09:25:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:12.576003 | orchestrator | 2025-02-10 09:25:12 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:15.621958 | orchestrator | 2025-02-10 09:25:12 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:15.622109 | orchestrator | 2025-02-10 09:25:12 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:15.622179 | orchestrator | 2025-02-10 09:25:15 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:15.623973 | orchestrator | 2025-02-10 09:25:15 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:18.669205 | orchestrator | 2025-02-10 09:25:15 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:18.669323 | orchestrator | 2025-02-10 09:25:18 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:18.675345 | orchestrator | 2025-02-10 09:25:18 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:21.712943 | orchestrator | 2025-02-10 09:25:18 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:21.713110 | orchestrator | 2025-02-10 09:25:21 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:24.759642 | orchestrator | 2025-02-10 09:25:21 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:24.759790 | orchestrator | 2025-02-10 09:25:21 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:24.759824 | orchestrator | 2025-02-10 09:25:24 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:24.762333 | orchestrator | 2025-02-10 09:25:24 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:27.817141 | orchestrator | 2025-02-10 09:25:24 | INFO  | Wait 1 second(s) until 
the next check 2025-02-10 09:25:27.817429 | orchestrator | 2025-02-10 09:25:27 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:27.817605 | orchestrator | 2025-02-10 09:25:27 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:27.817637 | orchestrator | 2025-02-10 09:25:27 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:30.864900 | orchestrator | 2025-02-10 09:25:30 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:33.915232 | orchestrator | 2025-02-10 09:25:30 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:33.915354 | orchestrator | 2025-02-10 09:25:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:33.915382 | orchestrator | 2025-02-10 09:25:33 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:33.915968 | orchestrator | 2025-02-10 09:25:33 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:36.953092 | orchestrator | 2025-02-10 09:25:33 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:36.953214 | orchestrator | 2025-02-10 09:25:36 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:40.006625 | orchestrator | 2025-02-10 09:25:36 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:40.006825 | orchestrator | 2025-02-10 09:25:36 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:40.006869 | orchestrator | 2025-02-10 09:25:40 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:40.007586 | orchestrator | 2025-02-10 09:25:40 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:43.059143 | orchestrator | 2025-02-10 09:25:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:43.059307 | orchestrator | 2025-02-10 09:25:43 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:46.111634 | orchestrator | 2025-02-10 09:25:43 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:46.111837 | orchestrator | 2025-02-10 09:25:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:46.111882 | orchestrator | 2025-02-10 09:25:46 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:46.115942 | orchestrator | 2025-02-10 09:25:46 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:49.166226 | orchestrator | 2025-02-10 09:25:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:49.166390 | orchestrator | 2025-02-10 09:25:49 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:52.213388 | orchestrator | 2025-02-10 09:25:49 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:52.213537 | orchestrator | 2025-02-10 09:25:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:52.213600 | orchestrator | 2025-02-10 09:25:52 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:52.213872 | orchestrator | 2025-02-10 09:25:52 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:55.271826 | orchestrator | 2025-02-10 09:25:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:55.271995 | orchestrator | 2025-02-10 09:25:55 | INFO  | Task 
ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:58.331190 | orchestrator | 2025-02-10 09:25:55 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:25:58.331335 | orchestrator | 2025-02-10 09:25:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:25:58.331411 | orchestrator | 2025-02-10 09:25:58 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:25:58.331735 | orchestrator | 2025-02-10 09:25:58 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:01.390744 | orchestrator | 2025-02-10 09:25:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:01.391038 | orchestrator | 2025-02-10 09:26:01 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:01.391371 | orchestrator | 2025-02-10 09:26:01 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:04.437545 | orchestrator | 2025-02-10 09:26:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:04.437691 | orchestrator | 2025-02-10 09:26:04 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:07.489931 | orchestrator | 2025-02-10 09:26:04 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:07.490165 | orchestrator | 2025-02-10 09:26:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:07.490209 | orchestrator | 2025-02-10 09:26:07 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:07.491122 | orchestrator | 2025-02-10 09:26:07 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:10.553987 | orchestrator | 2025-02-10 09:26:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:10.554200 | orchestrator | 2025-02-10 09:26:10 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:10.554906 | orchestrator | 2025-02-10 09:26:10 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:13.611387 | orchestrator | 2025-02-10 09:26:10 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:13.611570 | orchestrator | 2025-02-10 09:26:13 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:16.658573 | orchestrator | 2025-02-10 09:26:13 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:16.658716 | orchestrator | 2025-02-10 09:26:13 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:16.658756 | orchestrator | 2025-02-10 09:26:16 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:16.659074 | orchestrator | 2025-02-10 09:26:16 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:19.699708 | orchestrator | 2025-02-10 09:26:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:19.699905 | orchestrator | 2025-02-10 09:26:19 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:22.752465 | orchestrator | 2025-02-10 09:26:19 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:22.752706 | orchestrator | 2025-02-10 09:26:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:22.752758 | orchestrator | 2025-02-10 09:26:22 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:25.810264 | orchestrator 
| 2025-02-10 09:26:22 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:25.810449 | orchestrator | 2025-02-10 09:26:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:25.810628 | orchestrator | 2025-02-10 09:26:25 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:25.810665 | orchestrator | 2025-02-10 09:26:25 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:25.811055 | orchestrator | 2025-02-10 09:26:25 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:28.861530 | orchestrator | 2025-02-10 09:26:28 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:28.862725 | orchestrator | 2025-02-10 09:26:28 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:31.902987 | orchestrator | 2025-02-10 09:26:28 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:31.903215 | orchestrator | 2025-02-10 09:26:31 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:31.903987 | orchestrator | 2025-02-10 09:26:31 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:34.964131 | orchestrator | 2025-02-10 09:26:31 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:34.964296 | orchestrator | 2025-02-10 09:26:34 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:38.002701 | orchestrator | 2025-02-10 09:26:34 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:38.002906 | orchestrator | 2025-02-10 09:26:34 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:38.002949 | orchestrator | 2025-02-10 09:26:37 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:38.007825 | orchestrator | 2025-02-10 09:26:38 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:41.070171 | orchestrator | 2025-02-10 09:26:38 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:41.070322 | orchestrator | 2025-02-10 09:26:41 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:44.113731 | orchestrator | 2025-02-10 09:26:41 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:44.113942 | orchestrator | 2025-02-10 09:26:41 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:44.113984 | orchestrator | 2025-02-10 09:26:44 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:44.114203 | orchestrator | 2025-02-10 09:26:44 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:47.158706 | orchestrator | 2025-02-10 09:26:44 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:47.158895 | orchestrator | 2025-02-10 09:26:47 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:47.161404 | orchestrator | 2025-02-10 09:26:47 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:50.220839 | orchestrator | 2025-02-10 09:26:47 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:50.221011 | orchestrator | 2025-02-10 09:26:50 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:53.269626 | orchestrator | 2025-02-10 09:26:50 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 
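Earlier in this run the ovn-db role also blocked on "Wait for ovn-nb-db" and "Wait for ovn-sb-db" before continuing. A minimal sketch of that kind of readiness check follows, assuming the conventional OVN ports 6641 (northbound) and 6642 (southbound), which are not shown in this log; the role itself uses Ansible's wait_for, so this is only an illustration of the idea.

import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 60.0, interval: float = 1.0) -> None:
    """Block until a TCP connection to host:port succeeds, or raise on timeout."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{host}:{port} not reachable within {timeout}s")
            time.sleep(interval)

# Example calls (host names and ports are assumptions, not taken from this log):
# wait_for_port("testbed-node-0", 6641)  # OVN northbound DB
# wait_for_port("testbed-node-0", 6642)  # OVN southbound DB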
2025-02-10 09:26:53.269759 | orchestrator | 2025-02-10 09:26:50 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:53.269822 | orchestrator | 2025-02-10 09:26:53 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:56.309990 | orchestrator | 2025-02-10 09:26:53 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:56.310260 | orchestrator | 2025-02-10 09:26:53 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:56.310325 | orchestrator | 2025-02-10 09:26:56 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:26:56.311637 | orchestrator | 2025-02-10 09:26:56 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:26:59.375078 | orchestrator | 2025-02-10 09:26:56 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:26:59.375235 | orchestrator | 2025-02-10 09:26:59 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:27:02.428664 | orchestrator | 2025-02-10 09:26:59 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:02.428877 | orchestrator | 2025-02-10 09:26:59 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:02.428939 | orchestrator | 2025-02-10 09:27:02 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:27:05.469715 | orchestrator | 2025-02-10 09:27:02 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:05.469936 | orchestrator | 2025-02-10 09:27:02 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:05.469988 | orchestrator | 2025-02-10 09:27:05 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:27:08.514353 | orchestrator | 2025-02-10 09:27:05 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:08.514459 | orchestrator | 2025-02-10 09:27:05 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:08.514482 | orchestrator | 2025-02-10 09:27:08 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:27:08.514966 | orchestrator | 2025-02-10 09:27:08 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:11.566427 | orchestrator | 2025-02-10 09:27:08 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:11.566582 | orchestrator | 2025-02-10 09:27:11 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:27:11.569870 | orchestrator | 2025-02-10 09:27:11 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:14.637272 | orchestrator | 2025-02-10 09:27:11 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:14.637417 | orchestrator | 2025-02-10 09:27:14 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:27:14.638795 | orchestrator | 2025-02-10 09:27:14 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:17.694911 | orchestrator | 2025-02-10 09:27:14 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:17.695075 | orchestrator | 2025-02-10 09:27:17 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:27:20.747030 | orchestrator | 2025-02-10 09:27:17 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:20.747207 | orchestrator | 2025-02-10 09:27:17 | INFO  | Wait 1 second(s) until 
the next check 2025-02-10 09:27:20.747264 | orchestrator | 2025-02-10 09:27:20 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state STARTED 2025-02-10 09:27:23.809366 | orchestrator | 2025-02-10 09:27:20 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:23.809534 | orchestrator | 2025-02-10 09:27:20 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:23.809578 | orchestrator | 2025-02-10 09:27:23 | INFO  | Task ccf8f4bd-9349-494d-b69b-7e63ea35e96c is in state SUCCESS 2025-02-10 09:27:23.813052 | orchestrator | 2025-02-10 09:27:23.813120 | orchestrator | 2025-02-10 09:27:23.813144 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:27:23.813167 | orchestrator | 2025-02-10 09:27:23.813840 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:27:23.813905 | orchestrator | Monday 10 February 2025 09:19:25 +0000 (0:00:00.362) 0:00:00.362 ******* 2025-02-10 09:27:23.813932 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:27:23.813957 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:27:23.814324 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:27:23.814356 | orchestrator | 2025-02-10 09:27:23.814407 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:27:23.814426 | orchestrator | Monday 10 February 2025 09:19:26 +0000 (0:00:00.582) 0:00:00.944 ******* 2025-02-10 09:27:23.814452 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-02-10 09:27:23.814477 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-02-10 09:27:23.814500 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-02-10 09:27:23.814580 | orchestrator | 2025-02-10 09:27:23.814604 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-02-10 09:27:23.814628 | orchestrator | 2025-02-10 09:27:23.814749 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-02-10 09:27:23.814985 | orchestrator | Monday 10 February 2025 09:19:27 +0000 (0:00:00.974) 0:00:01.919 ******* 2025-02-10 09:27:23.815015 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.815040 | orchestrator | 2025-02-10 09:27:23.815065 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-02-10 09:27:23.815088 | orchestrator | Monday 10 February 2025 09:19:28 +0000 (0:00:01.478) 0:00:03.397 ******* 2025-02-10 09:27:23.818223 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:27:23.818329 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:27:23.818365 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:27:23.818398 | orchestrator | 2025-02-10 09:27:23.818416 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-02-10 09:27:23.818432 | orchestrator | Monday 10 February 2025 09:19:30 +0000 (0:00:01.642) 0:00:05.040 ******* 2025-02-10 09:27:23.818446 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.818460 | orchestrator | 2025-02-10 09:27:23.818475 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-02-10 09:27:23.818489 | orchestrator | Monday 10 February 2025 09:19:31 +0000 
(0:00:01.804) 0:00:06.844 ******* 2025-02-10 09:27:23.818503 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:27:23.818517 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:27:23.818531 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:27:23.818546 | orchestrator | 2025-02-10 09:27:23.818581 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-02-10 09:27:23.818597 | orchestrator | Monday 10 February 2025 09:19:33 +0000 (0:00:01.131) 0:00:07.976 ******* 2025-02-10 09:27:23.818611 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-02-10 09:27:23.818626 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-02-10 09:27:23.818640 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-02-10 09:27:23.818654 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-02-10 09:27:23.818669 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-02-10 09:27:23.818684 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-02-10 09:27:23.818698 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-02-10 09:27:23.818712 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-02-10 09:27:23.818726 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-02-10 09:27:23.818740 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-02-10 09:27:23.818791 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-02-10 09:27:23.818827 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-02-10 09:27:23.818842 | orchestrator | 2025-02-10 09:27:23.818858 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-02-10 09:27:23.818872 | orchestrator | Monday 10 February 2025 09:19:39 +0000 (0:00:06.201) 0:00:14.178 ******* 2025-02-10 09:27:23.818886 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-02-10 09:27:23.818901 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-02-10 09:27:23.818915 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-02-10 09:27:23.818929 | orchestrator | 2025-02-10 09:27:23.818943 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-02-10 09:27:23.818957 | orchestrator | Monday 10 February 2025 09:19:40 +0000 (0:00:01.495) 0:00:15.674 ******* 2025-02-10 09:27:23.818971 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-02-10 09:27:23.818986 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-02-10 09:27:23.819000 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-02-10 09:27:23.819014 | orchestrator | 2025-02-10 09:27:23.819028 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-02-10 09:27:23.819042 | orchestrator | Monday 10 February 2025 09:19:43 +0000 (0:00:02.712) 0:00:18.386 ******* 2025-02-10 09:27:23.819056 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  
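(Editor's note: for reference, the kernel tuning applied by the sysctl and module-load tasks above amounts to the steps sketched below in Python. This is an illustration of the equivalent commands, not what the kolla-ansible roles literally execute (they use Ansible modules); the KOLLA_UNSET entry for net.ipv4.tcp_retries2 is simply left untouched.)

    import subprocess

    # Values taken from the task output above: allow haproxy/keepalived to bind the
    # VIP even while the address is not yet local, and raise the unix datagram queue.
    SYSCTLS = {
        "net.ipv6.ip_nonlocal_bind": "1",
        "net.ipv4.ip_nonlocal_bind": "1",
        "net.unix.max_dgram_qlen": "128",
    }

    def apply_loadbalancer_tuning():
        for key, value in SYSCTLS.items():
            subprocess.run(["sysctl", "-w", f"{key}={value}"], check=True)
        # keepalived relies on the ip_vs module; load it now and persist it across reboots.
        subprocess.run(["modprobe", "ip_vs"], check=True)
        with open("/etc/modules-load.d/ip_vs.conf", "w", encoding="utf-8") as handle:
            handle.write("ip_vs\n")
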
2025-02-10 09:27:23.819070 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.819119 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-02-10 09:27:23.819135 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.819150 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-02-10 09:27:23.819164 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.819178 | orchestrator | 2025-02-10 09:27:23.819192 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-02-10 09:27:23.819206 | orchestrator | Monday 10 February 2025 09:19:45 +0000 (0:00:02.198) 0:00:20.584 ******* 2025-02-10 09:27:23.819224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-02-10 09:27:23.819243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:27:23.819258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-10 09:27:23.819289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:27:23.819305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-02-10 09:27:23.819330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:27:23.819347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:27:23.819362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:27:23.819377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:27:23.819393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:27:23.819415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:27:23.819430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:27:23.819444 | orchestrator | 2025-02-10 09:27:23.819459 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-02-10 09:27:23.819473 | orchestrator | Monday 10 February 2025 09:19:47 +0000 (0:00:01.950) 0:00:22.535 ******* 2025-02-10 09:27:23.819487 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:27:23.819501 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:27:23.819516 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:27:23.819530 | orchestrator | 2025-02-10 09:27:23.819544 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-02-10 09:27:23.819558 | orchestrator | Monday 10 February 2025 09:19:49 +0000 (0:00:02.088) 0:00:24.623 ******* 2025-02-10 09:27:23.819579 | orchestrator | skipping: [testbed-node-0] => (item=users)  2025-02-10 09:27:23.819594 | orchestrator | skipping: [testbed-node-1] => (item=users)  2025-02-10 09:27:23.819608 | orchestrator | skipping: [testbed-node-2] => (item=users)  2025-02-10 09:27:23.819622 | orchestrator | skipping: [testbed-node-0] => (item=rules)  2025-02-10 09:27:23.819636 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.819659 | orchestrator | skipping: [testbed-node-1] => (item=rules)  2025-02-10 09:27:23.819673 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.819687 | orchestrator | skipping: [testbed-node-2] => (item=rules)  2025-02-10 09:27:23.819701 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.819715 | orchestrator | 2025-02-10 09:27:23.819729 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-02-10 09:27:23.819743 | orchestrator | Monday 10 February 2025 09:19:52 +0000 (0:00:02.583) 0:00:27.207 ******* 2025-02-10 09:27:23.819757 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:27:23.819771 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:27:23.819785 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:27:23.819799 | orchestrator | 2025-02-10 09:27:23.819881 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] 
******************* 2025-02-10 09:27:23.819902 | orchestrator | Monday 10 February 2025 09:19:53 +0000 (0:00:01.454) 0:00:28.661 ******* 2025-02-10 09:27:23.819916 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.819931 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.819945 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.819974 | orchestrator | 2025-02-10 09:27:23.819999 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-02-10 09:27:23.820014 | orchestrator | Monday 10 February 2025 09:19:56 +0000 (0:00:02.473) 0:00:31.134 ******* 2025-02-10 09:27:23.820028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-02-10 09:27:23.820043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-02-10 09:27:23.820058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-02-10 09:27:23.820073 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-02-10 09:27:23.820111 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-02-10 09:27:23.820127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.820149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:27:23.820164 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-02-10 09:27:23.820179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.820194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.820208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': 
{'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:27:23.820237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:27:23.820252 | orchestrator | 2025-02-10 09:27:23.820267 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-02-10 09:27:23.820281 | orchestrator | Monday 10 February 2025 09:19:59 +0000 (0:00:03.684) 0:00:34.818 ******* 2025-02-10 09:27:23.820303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-02-10 09:27:23.820318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-10 09:27:23.820333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-02-10 09:27:23.820358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:27:23.820374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:27:23.820395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:27:23.820410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.820433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.820447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.820468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:27:23.820483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:27:23.820509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:27:23.820524 | orchestrator | 2025-02-10 09:27:23.820538 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-02-10 09:27:23.820553 | orchestrator | Monday 10 February 2025 09:20:05 +0000 (0:00:05.797) 0:00:40.616 ******* 2025-02-10 09:27:23.820576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-02-10 09:27:23.820598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:27:23.820613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-02-10 09:27:23.820628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:27:23.820642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-10 09:27:23.820657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:27:23.820678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:27:23.820699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:27:23.820714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:27:23.820729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:27:23.820744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:27:23.820759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:27:23.820773 | orchestrator | 2025-02-10 09:27:23.820787 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-02-10 09:27:23.820821 | orchestrator | Monday 10 February 
2025 09:20:08 +0000 (0:00:02.827) 0:00:43.443 ******* 2025-02-10 09:27:23.820837 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-02-10 09:27:23.820852 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-02-10 09:27:23.820866 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-02-10 09:27:23.820887 | orchestrator | 2025-02-10 09:27:23.820902 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-02-10 09:27:23.820916 | orchestrator | Monday 10 February 2025 09:20:13 +0000 (0:00:05.130) 0:00:48.573 ******* 2025-02-10 09:27:23.820930 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-02-10 09:27:23.820944 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.820959 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-02-10 09:27:23.820979 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.820994 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-02-10 09:27:23.821008 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.821022 | orchestrator | 2025-02-10 09:27:23.821036 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-02-10 09:27:23.821050 | orchestrator | Monday 10 February 2025 09:20:15 +0000 (0:00:02.068) 0:00:50.641 ******* 2025-02-10 09:27:23.821064 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.821078 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.821092 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.821106 | orchestrator | 2025-02-10 09:27:23.821120 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-02-10 09:27:23.821134 | orchestrator | Monday 10 February 2025 09:20:16 +0000 (0:00:01.196) 0:00:51.837 ******* 2025-02-10 09:27:23.821148 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-02-10 09:27:23.821163 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-02-10 09:27:23.821178 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-02-10 09:27:23.821192 | orchestrator | 2025-02-10 09:27:23.821206 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-02-10 09:27:23.821221 | orchestrator | Monday 10 February 2025 09:20:20 +0000 (0:00:03.126) 0:00:54.964 ******* 2025-02-10 09:27:23.821235 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-02-10 09:27:23.821249 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-02-10 09:27:23.821264 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-02-10 09:27:23.821277 | orchestrator | 2025-02-10 09:27:23.821291 | orchestrator | TASK [loadbalancer : Copying 
over haproxy.pem] ********************************* 2025-02-10 09:27:23.821305 | orchestrator | Monday 10 February 2025 09:20:23 +0000 (0:00:03.189) 0:00:58.153 ******* 2025-02-10 09:27:23.821319 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-02-10 09:27:23.821333 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-02-10 09:27:23.821348 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-02-10 09:27:23.821361 | orchestrator | 2025-02-10 09:27:23.821376 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-02-10 09:27:23.821390 | orchestrator | Monday 10 February 2025 09:20:26 +0000 (0:00:03.477) 0:01:01.631 ******* 2025-02-10 09:27:23.821403 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-02-10 09:27:23.821418 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-02-10 09:27:23.821432 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-02-10 09:27:23.821446 | orchestrator | 2025-02-10 09:27:23.821460 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-02-10 09:27:23.821474 | orchestrator | Monday 10 February 2025 09:20:30 +0000 (0:00:03.446) 0:01:05.077 ******* 2025-02-10 09:27:23.821499 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.821514 | orchestrator | 2025-02-10 09:27:23.821528 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-02-10 09:27:23.821542 | orchestrator | Monday 10 February 2025 09:20:31 +0000 (0:00:01.256) 0:01:06.333 ******* 2025-02-10 09:27:23.821562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-02-10 09:27:23.821578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-02-10 09:27:23.821600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-10 09:27:23.821615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:27:23.821630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:27:23.821644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:27:23.821658 | orchestrator | 2025-02-10 09:27:23.821672 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-02-10 09:27:23.821687 | orchestrator | Monday 10 February 2025 09:20:34 +0000 (0:00:02.718) 0:01:09.051 ******* 2025-02-10 09:27:23.821709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-02-10 09:27:23.821728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.821743 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.821758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-02-10 09:27:23.821780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.821796 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.821827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-02-10 09:27:23.821842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.821857 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.821871 | orchestrator | 2025-02-10 09:27:23.821886 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-02-10 09:27:23.821900 | orchestrator | Monday 10 February 2025 09:20:35 +0000 (0:00:01.221) 0:01:10.273 ******* 2025-02-10 09:27:23.821926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-02-10 09:27:23.821941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.821955 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.821970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-02-10 09:27:23.821984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.821999 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.822056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-02-10 09:27:23.822075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.822090 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.822105 | orchestrator | 2025-02-10 09:27:23.822120 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-02-10 09:27:23.822134 | orchestrator | Monday 10 February 2025 09:20:38 +0000 (0:00:02.960) 0:01:13.234 ******* 2025-02-10 09:27:23.822148 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-02-10 09:27:23.822169 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-02-10 09:27:23.822184 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-02-10 09:27:23.822198 | orchestrator | 2025-02-10 09:27:23.822212 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-02-10 09:27:23.822231 | orchestrator | Monday 10 February 2025 09:20:43 +0000 (0:00:04.811) 0:01:18.045 ******* 2025-02-10 09:27:23.822246 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-02-10 09:27:23.822260 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.822275 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-02-10 09:27:23.822289 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.822303 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-02-10 09:27:23.822317 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.822331 | orchestrator | 2025-02-10 09:27:23.822345 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-02-10 09:27:23.822359 | orchestrator | Monday 10 February 2025 09:20:44 +0000 (0:00:01.450) 0:01:19.496 ******* 2025-02-10 09:27:23.822373 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-02-10 09:27:23.822387 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-02-10 09:27:23.822401 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-10 09:27:23.822415 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.822429 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-02-10 09:27:23.822443 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-10 09:27:23.822457 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.822475 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-10 09:27:23.822489 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.822504 | orchestrator | 2025-02-10 09:27:23.822518 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-02-10 09:27:23.822532 | orchestrator | Monday 10 February 2025 09:20:47 +0000 (0:00:02.845) 0:01:22.341 ******* 2025-02-10 09:27:23.822546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-02-10 09:27:23.822567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:27:23.822582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-10 09:27:23.822613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:27:23.822628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-02-10 09:27:23.822643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-10 09:27:23.822657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:27:23.822679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:27:23.822694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:27:23.822717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:27:23.822737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-10 09:27:23.822757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928', '__omit_place_holder__da14134c296bf84f7ced98bc71f9e2030de43928'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-10 09:27:23.822772 | orchestrator | 2025-02-10 09:27:23.822787 | orchestrator | TASK [include_role : aodh] 
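Each loadbalancer container definition above carries a healthcheck block (interval 30, retries 3, start_period 5, timeout 30, and a CMD-SHELL test such as healthcheck_curl http://192.168.16.10:61313). As a minimal sketch of what that probe amounts to, assuming only that the haproxy monitor port answers plain HTTP, it could be approximated from outside the container as follows; the real check runs kolla's healthcheck_curl helper inside the container, so this is an outside-in approximation, not the deployed script.

# Outside-in approximation of the container healthcheck shown in the log:
# GET the monitor endpoint, up to 3 attempts, 30 s timeout per attempt.
import time
import urllib.request
import urllib.error

def probe(url: str, retries: int = 3, timeout: int = 30, interval: int = 30) -> bool:
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400
        except (urllib.error.URLError, OSError):
            if attempt < retries:
                time.sleep(interval)
    return False

if __name__ == "__main__":
    # Address and port taken verbatim from the haproxy item on testbed-node-0.
    print(probe("http://192.168.16.10:61313"))
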
***************************************************** 2025-02-10 09:27:23.822828 | orchestrator | Monday 10 February 2025 09:20:50 +0000 (0:00:03.370) 0:01:25.712 ******* 2025-02-10 09:27:23.822845 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.822859 | orchestrator | 2025-02-10 09:27:23.822873 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-02-10 09:27:23.822887 | orchestrator | Monday 10 February 2025 09:20:51 +0000 (0:00:01.093) 0:01:26.805 ******* 2025-02-10 09:27:23.822902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-02-10 09:27:23.822950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-10 09:27:23.822975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.822990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.823006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-api:2024.1', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-02-10 09:27:23.823030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-10 09:27:23.823045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.823060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.823101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-02-10 09:27:23.823117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 
'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-10 09:27:23.823132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.823152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.823167 | orchestrator | 2025-02-10 09:27:23.823181 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-02-10 09:27:23.823195 | orchestrator | Monday 10 February 2025 09:20:59 +0000 (0:00:07.919) 0:01:34.725 ******* 2025-02-10 09:27:23.823235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-02-10 09:27:23.823251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-10 09:27:23.823279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.823295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.823309 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.823325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-02-10 09:27:23.823339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-10 09:27:23.823354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.823369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.823389 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.823412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-02-10 09:27:23.823427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-10 09:27:23.823468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.823484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.823500 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.823514 | orchestrator | 2025-02-10 09:27:23.823529 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-02-10 09:27:23.823543 | orchestrator | Monday 10 February 2025 09:21:01 +0000 (0:00:02.034) 
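Of the aodh items printed above, only aodh-api carries an 'haproxy' key, defining an internal aodh_api frontend and an external aodh_api_external frontend on port 8042 behind api.testbed.osism.xyz. Purely as an illustration of what such an entry describes (not the actual kolla-ansible template output), it can be rendered into an indicative HAProxy stanza; the bind address below is a placeholder VIP, and the check options are borrowed from the ceph-rgw member lines that appear later in this log.

# Illustrative rendering of one 'haproxy' entry from the log into a listen
# stanza. Not kolla-ansible's real template; the VIP is a placeholder.
def render_listen(name, svc, members, vip="192.168.16.254"):
    lines = [
        f"listen {name}",
        f"    mode {svc['mode']}",
        f"    bind {vip}:{svc['listen_port']}",
    ]
    for host, addr in members:
        # check options borrowed from the ceph-rgw custom_member_list entries
        lines.append(f"    server {host} {addr}:{svc['port']} check inter 2000 rise 2 fall 5")
    return "\n".join(lines)

aodh_api = {"enabled": "yes", "mode": "http", "external": False,
            "port": "8042", "listen_port": "8042"}
controllers = [("testbed-node-0", "192.168.16.10"),
               ("testbed-node-1", "192.168.16.11"),
               ("testbed-node-2", "192.168.16.12")]
print(render_listen("aodh_api", aodh_api, controllers))
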
0:01:36.760 ******* 2025-02-10 09:27:23.823557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-02-10 09:27:23.823573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-02-10 09:27:23.823587 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.823602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-02-10 09:27:23.823625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-02-10 09:27:23.823639 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.823654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-02-10 09:27:23.823668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-02-10 09:27:23.823682 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.823696 | orchestrator | 2025-02-10 09:27:23.823710 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-02-10 09:27:23.823725 | orchestrator | Monday 10 February 2025 09:21:03 +0000 (0:00:01.716) 0:01:38.476 ******* 2025-02-10 09:27:23.823739 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.823753 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.823767 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.823781 | orchestrator | 2025-02-10 09:27:23.823827 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-02-10 09:27:23.823843 | orchestrator | Monday 10 February 2025 09:21:04 +0000 (0:00:00.945) 0:01:39.422 ******* 2025-02-10 09:27:23.823857 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.823871 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.823886 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.823899 | orchestrator | 2025-02-10 09:27:23.823914 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-02-10 09:27:23.823928 | orchestrator | Monday 10 February 2025 09:21:07 +0000 (0:00:03.099) 0:01:42.521 ******* 2025-02-10 09:27:23.823941 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.823956 | orchestrator | 2025-02-10 09:27:23.823970 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-02-10 09:27:23.823984 | orchestrator | Monday 10 February 2025 09:21:08 +0000 (0:00:01.208) 0:01:43.730 ******* 2025-02-10 09:27:23.823999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.824015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.824039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.824055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.824090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.824106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.824122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.824158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.824174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.824189 | orchestrator | 2025-02-10 09:27:23.824203 | orchestrator | TASK [haproxy-config : Add configuration for 
barbican when using single external frontend] *** 2025-02-10 09:27:23.824218 | orchestrator | Monday 10 February 2025 09:21:18 +0000 (0:00:09.575) 0:01:53.305 ******* 2025-02-10 09:27:23.824249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.824265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.824280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.824295 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.824328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.824344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.824365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.824380 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.824405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.824421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.824443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.824457 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.824472 | orchestrator | 2025-02-10 09:27:23.824486 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-02-10 09:27:23.824500 | orchestrator | Monday 10 February 2025 09:21:19 +0000 (0:00:01.041) 0:01:54.347 ******* 2025-02-10 09:27:23.824514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-10 09:27:23.824529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-10 09:27:23.824543 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.824558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-10 09:27:23.824577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-10 09:27:23.824605 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.824631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-10 09:27:23.824646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-10 09:27:23.824661 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.824675 | orchestrator | 2025-02-10 09:27:23.824689 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-02-10 09:27:23.824709 | orchestrator | Monday 10 February 2025 09:21:21 +0000 (0:00:02.330) 0:01:56.677 ******* 2025-02-10 09:27:23.824724 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.824738 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.824753 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.824767 | orchestrator | 2025-02-10 09:27:23.824781 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-02-10 09:27:23.824869 | orchestrator | Monday 10 February 2025 09:21:22 +0000 (0:00:00.653) 0:01:57.330 ******* 2025-02-10 09:27:23.824886 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.824900 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.824914 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.824928 | orchestrator 
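The barbican block repeats the pattern already visible for aodh: only the API service definition carries an 'haproxy' key, so haproxy-config reports "changed" for barbican-api and "skipping" for the keystone-listener and worker items, while the proxysql-config tasks skip entirely. A small sketch of that selection, using a trimmed copy of the dicts printed above (the real condition lives in the kolla-ansible role; this only mirrors the observable behaviour):

# Mirror of the changed/skipping pattern in the log: entries with an
# 'haproxy' key get proxy configuration, the rest are skipped.
services = {
    "barbican-api": {
        "enabled": True,
        "haproxy": {
            "barbican_api": {"port": "9311", "external": False, "tls_backend": "no"},
            "barbican_api_external": {"port": "9311", "external": True,
                                      "external_fqdn": "api.testbed.osism.xyz"},
        },
    },
    "barbican-keystone-listener": {"enabled": True},
    "barbican-worker": {"enabled": True},
}

for name, svc in services.items():
    if svc.get("enabled") and svc.get("haproxy"):
        print(f"changed:  {name} -> {sorted(svc['haproxy'])}")
    else:
        print(f"skipping: {name}")
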
| 2025-02-10 09:27:23.824942 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-02-10 09:27:23.824956 | orchestrator | Monday 10 February 2025 09:21:23 +0000 (0:00:01.464) 0:01:58.794 ******* 2025-02-10 09:27:23.824970 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.824994 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.825008 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.825030 | orchestrator | 2025-02-10 09:27:23.825049 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-02-10 09:27:23.825064 | orchestrator | Monday 10 February 2025 09:21:24 +0000 (0:00:00.480) 0:01:59.275 ******* 2025-02-10 09:27:23.825079 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.825093 | orchestrator | 2025-02-10 09:27:23.825107 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-02-10 09:27:23.825121 | orchestrator | Monday 10 February 2025 09:21:25 +0000 (0:00:00.882) 0:02:00.157 ******* 2025-02-10 09:27:23.825146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-02-10 09:27:23.825171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-02-10 09:27:23.825184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-02-10 09:27:23.825197 | orchestrator | 2025-02-10 09:27:23.825210 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-02-10 09:27:23.825223 | orchestrator | Monday 10 February 2025 09:21:28 +0000 (0:00:03.150) 0:02:03.308 ******* 2025-02-10 09:27:23.825242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-02-10 09:27:23.825262 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.825275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-02-10 09:27:23.825288 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.825309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-02-10 09:27:23.825323 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.825336 | orchestrator | 2025-02-10 09:27:23.825360 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-02-10 09:27:23.825373 | orchestrator | Monday 10 
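Unlike the API services, the ceph-rgw item defines no container of its own here; its haproxy config is built from a custom_member_list, so the frontend on port 6780 forwards to radosgw on the storage nodes at port 8081. The member lines can be reconstructed directly from the data in the task output:

# Backend members exactly as listed in the ceph-rgw item above.
rgw_nodes = {
    "testbed-node-3": "192.168.16.13",
    "testbed-node-4": "192.168.16.14",
    "testbed-node-5": "192.168.16.15",
}
members = [f"server {host} {addr}:8081 check inter 2000 rise 2 fall 5"
           for host, addr in rgw_nodes.items()]
print("\n".join(members))
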
February 2025 09:21:30 +0000 (0:00:02.023) 0:02:05.331 ******* 2025-02-10 09:27:23.825385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-10 09:27:23.825401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-10 09:27:23.825414 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.825427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-10 09:27:23.825446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-10 09:27:23.825466 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.825479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-10 09:27:23.825492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-10 09:27:23.825504 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.825517 | orchestrator | 2025-02-10 09:27:23.825530 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-02-10 09:27:23.825543 | orchestrator | Monday 10 February 2025 09:21:32 +0000 (0:00:01.838) 0:02:07.170 ******* 2025-02-10 09:27:23.825555 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.825568 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.825580 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.825593 | orchestrator | 2025-02-10 
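Each task header is followed by a profiling line such as "Monday 10 February 2025 09:21:32 +0000 (0:00:01.838) 0:02:07.170", which matches the format of Ansible's profile_tasks callback: the bracketed value is the time spent on the previous task and the trailing value is the cumulative elapsed time. Assuming that format, both durations can be pulled out of a log line like so:

# Parse one of the profiling lines shown in the log and convert both
# durations to seconds. Format assumed to be profile_tasks-style:
# "<timestamp> (<previous task duration>) <cumulative elapsed>".
import re

line = "Monday 10 February 2025 09:21:32 +0000 (0:00:01.838) 0:02:07.170"

def to_seconds(hms: str) -> float:
    h, m, s = hms.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

match = re.search(r"\((\d+:\d+:\d+\.\d+)\)\s+(\d+:\d+:\d+\.\d+)", line)
if match:
    prev_task, cumulative = match.groups()
    print(to_seconds(prev_task), to_seconds(cumulative))  # 1.838 127.17
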
09:27:23.825606 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-02-10 09:27:23.825618 | orchestrator | Monday 10 February 2025 09:21:32 +0000 (0:00:00.330) 0:02:07.500 ******* 2025-02-10 09:27:23.825630 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.825643 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.825655 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.825668 | orchestrator | 2025-02-10 09:27:23.825680 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-02-10 09:27:23.825693 | orchestrator | Monday 10 February 2025 09:21:33 +0000 (0:00:01.100) 0:02:08.601 ******* 2025-02-10 09:27:23.825706 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.825718 | orchestrator | 2025-02-10 09:27:23.825731 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-02-10 09:27:23.825743 | orchestrator | Monday 10 February 2025 09:21:34 +0000 (0:00:00.768) 0:02:09.369 ******* 2025-02-10 09:27:23.825756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.825770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.825845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 
'timeout': '30'}}})  2025-02-10 09:27:23.825863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.825883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.825897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.825910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.825930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.825959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.825979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.825990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826012 | orchestrator | 2025-02-10 09:27:23.826075 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-02-10 09:27:23.826087 | orchestrator | Monday 10 February 2025 09:21:39 +0000 (0:00:04.767) 0:02:14.136 ******* 2025-02-10 09:27:23.826098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.826133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826169 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.826180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.826196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826251 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.826262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': 
True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.826273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826335 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.826346 | orchestrator | 2025-02-10 09:27:23.826356 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-02-10 09:27:23.826367 | orchestrator | Monday 10 February 2025 09:21:40 +0000 (0:00:01.394) 0:02:15.531 ******* 2025-02-10 09:27:23.826377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-10 09:27:23.826387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-10 09:27:23.826398 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.826408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-10 09:27:23.826419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-10 09:27:23.826429 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.826440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-10 09:27:23.826450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-10 09:27:23.826461 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.826471 | orchestrator | 2025-02-10 09:27:23.826481 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-02-10 09:27:23.826491 | orchestrator | Monday 10 February 2025 09:21:42 +0000 (0:00:02.077) 0:02:17.609 ******* 2025-02-10 09:27:23.826501 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.826511 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.826527 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.826538 | orchestrator | 2025-02-10 09:27:23.826548 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-02-10 09:27:23.826558 | orchestrator | Monday 10 February 2025 09:21:43 +0000 (0:00:00.643) 0:02:18.252 ******* 2025-02-10 09:27:23.826568 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.826578 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.826589 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.826599 | orchestrator | 2025-02-10 09:27:23.826610 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-02-10 09:27:23.826620 | orchestrator | Monday 10 February 2025 09:21:45 +0000 (0:00:02.228) 0:02:20.481 ******* 2025-02-10 09:27:23.826630 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.826640 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.826650 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.826660 | orchestrator | 2025-02-10 09:27:23.826670 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-02-10 09:27:23.826680 | orchestrator | Monday 10 February 2025 09:21:46 +0000 (0:00:00.616) 0:02:21.097 ******* 2025-02-10 09:27:23.826690 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.826700 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.826711 | orchestrator | skipping: [testbed-node-2] 
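
[Editor's note] The skipped and changed items above show the service dictionaries that kolla-ansible's haproxy-config role loops over for cinder (cinder_api / cinder_api_external with mode, port, listen_port, tls_backend, plus per-node backends). As a rough, hypothetical illustration only — not the role's actual Jinja templating — the sketch below shows how one such entry together with a member list could map onto an HAProxy frontend/backend pair. The render() helper, the output layout, and the 192.168.16.9 internal VIP are assumptions made for this example.

    # Illustrative sketch: turn one kolla-style haproxy service entry
    # (shaped like the log items above) into a simplified HAProxy stanza.
    # This is NOT kolla-ansible's real template; it only mirrors the data.

    cinder_api = {
        "enabled": "yes",
        "mode": "http",
        "external": False,
        "port": "8776",
        "listen_port": "8776",
        "tls_backend": "no",
    }

    # Backend members mirror the three controllers seen in the log.
    members = [
        ("testbed-node-0", "192.168.16.10"),
        ("testbed-node-1", "192.168.16.11"),
        ("testbed-node-2", "192.168.16.12"),
    ]

    def render(name: str, svc: dict, vip: str) -> str:
        """Render a simplified frontend/backend pair for one service entry."""
        lines = [
            f"frontend {name}_front",
            f"    mode {svc['mode']}",
            f"    bind {vip}:{svc['port']}",
            f"    default_backend {name}_back",
            f"backend {name}_back",
            f"    mode {svc['mode']}",
        ]
        for host, addr in members:
            lines.append(
                f"    server {host} {addr}:{svc['listen_port']} "
                "check inter 2000 rise 2 fall 5"
            )
        return "\n".join(lines)

    # 192.168.16.9 is assumed here to be the internal VIP of the testbed.
    print(render("cinder_api", cinder_api, "192.168.16.9"))
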
2025-02-10 09:27:23.826721 | orchestrator | 2025-02-10 09:27:23.826731 | orchestrator | TASK [include_role : designate] ************************************************ 2025-02-10 09:27:23.826742 | orchestrator | Monday 10 February 2025 09:21:46 +0000 (0:00:00.696) 0:02:21.793 ******* 2025-02-10 09:27:23.826752 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.826762 | orchestrator | 2025-02-10 09:27:23.826773 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-02-10 09:27:23.826788 | orchestrator | Monday 10 February 2025 09:21:48 +0000 (0:00:01.473) 0:02:23.267 ******* 2025-02-10 09:27:23.826816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:27:23.826829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:27:23.826840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 
09:27:23.826867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:27:23.826925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:27:23.826943 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.826993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:27:23.827032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:27:23.827044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827104 | orchestrator | 2025-02-10 09:27:23.827114 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-02-10 09:27:23.827124 | orchestrator | Monday 10 February 2025 09:21:54 +0000 (0:00:06.125) 0:02:29.393 ******* 2025-02-10 09:27:23.827140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:27:23.827157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:27:23.827169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827233 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.827251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:27:23.827262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:27:23.827273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:27:23.827349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:27:23.827360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827381 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.827396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 
09:27:23.827423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.827452 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.827462 | orchestrator | 2025-02-10 09:27:23.827473 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-02-10 09:27:23.827483 | orchestrator | Monday 10 February 2025 09:21:55 +0000 (0:00:01.078) 0:02:30.471 ******* 2025-02-10 09:27:23.827494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-02-10 09:27:23.827504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-02-10 09:27:23.827515 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.827525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-02-10 09:27:23.827536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-02-10 09:27:23.827546 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.827557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-02-10 09:27:23.827567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-02-10 09:27:23.827577 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.827587 | orchestrator | 2025-02-10 09:27:23.827597 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-02-10 09:27:23.827608 | orchestrator | Monday 10 February 2025 09:21:57 +0000 (0:00:02.032) 0:02:32.503 ******* 2025-02-10 09:27:23.827623 | orchestrator | 
skipping: [testbed-node-0] 2025-02-10 09:27:23.827634 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.827644 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.827654 | orchestrator | 2025-02-10 09:27:23.827664 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-02-10 09:27:23.827674 | orchestrator | Monday 10 February 2025 09:21:58 +0000 (0:00:00.657) 0:02:33.161 ******* 2025-02-10 09:27:23.827684 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.827695 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.827705 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.827715 | orchestrator | 2025-02-10 09:27:23.827726 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-02-10 09:27:23.827740 | orchestrator | Monday 10 February 2025 09:22:00 +0000 (0:00:01.846) 0:02:35.008 ******* 2025-02-10 09:27:23.827751 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.827761 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.827771 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.827781 | orchestrator | 2025-02-10 09:27:23.827792 | orchestrator | TASK [include_role : glance] *************************************************** 2025-02-10 09:27:23.827817 | orchestrator | Monday 10 February 2025 09:22:00 +0000 (0:00:00.553) 0:02:35.561 ******* 2025-02-10 09:27:23.827828 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.827838 | orchestrator | 2025-02-10 09:27:23.827848 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-02-10 09:27:23.827859 | orchestrator | Monday 10 February 2025 09:22:02 +0000 (0:00:01.757) 0:02:37.319 ******* 2025-02-10 09:27:23.827870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:27:23.827894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:27:23.827920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:27:23.827932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:27:23.827954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:27:23.827973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:27:23.827989 | orchestrator | 2025-02-10 09:27:23.828000 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-02-10 09:27:23.828010 | orchestrator | Monday 10 February 2025 09:22:12 +0000 (0:00:10.117) 0:02:47.436 ******* 2025-02-10 09:27:23.828027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-10 09:27:23.828045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:27:23.828061 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.828077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-10 09:27:23.828096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:27:23.828107 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.828133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-10 09:27:23.828145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:27:23.828166 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.828177 | orchestrator | 2025-02-10 09:27:23.828187 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-02-10 09:27:23.828198 | orchestrator | Monday 10 February 2025 09:22:22 +0000 (0:00:10.194) 0:02:57.631 ******* 2025-02-10 09:27:23.828208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-10 09:27:23.828225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-10 09:27:23.828236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-10 09:27:23.828247 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.828268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-10 09:27:23.828280 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.828297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-10 09:27:23.828309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-10 09:27:23.828320 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.828331 | orchestrator | 2025-02-10 09:27:23.828342 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-02-10 09:27:23.828352 | 
orchestrator | Monday 10 February 2025 09:22:31 +0000 (0:00:08.837) 0:03:06.468 ******* 2025-02-10 09:27:23.828363 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.828373 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.828384 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.828394 | orchestrator | 2025-02-10 09:27:23.828405 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-02-10 09:27:23.828415 | orchestrator | Monday 10 February 2025 09:22:32 +0000 (0:00:00.609) 0:03:07.078 ******* 2025-02-10 09:27:23.828425 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.828440 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.828451 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.828461 | orchestrator | 2025-02-10 09:27:23.828472 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-02-10 09:27:23.828482 | orchestrator | Monday 10 February 2025 09:22:34 +0000 (0:00:02.417) 0:03:09.496 ******* 2025-02-10 09:27:23.828492 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.828502 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.828512 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.828522 | orchestrator | 2025-02-10 09:27:23.828532 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-02-10 09:27:23.828542 | orchestrator | Monday 10 February 2025 09:22:34 +0000 (0:00:00.368) 0:03:09.864 ******* 2025-02-10 09:27:23.828552 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.828562 | orchestrator | 2025-02-10 09:27:23.828572 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-02-10 09:27:23.828582 | orchestrator | Monday 10 February 2025 09:22:36 +0000 (0:00:01.664) 0:03:11.528 ******* 2025-02-10 09:27:23.828593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:27:23.828604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:27:23.828620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:27:23.828631 | orchestrator | 2025-02-10 09:27:23.828641 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-02-10 09:27:23.828655 | orchestrator | Monday 10 February 2025 09:22:41 +0000 (0:00:04.453) 0:03:15.982 ******* 2025-02-10 09:27:23.828665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-10 09:27:23.828682 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.828693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-10 09:27:23.828703 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.828714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-10 09:27:23.828724 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.828734 | orchestrator | 2025-02-10 09:27:23.828745 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-02-10 09:27:23.828755 | orchestrator | Monday 10 February 2025 09:22:41 +0000 (0:00:00.500) 0:03:16.482 ******* 2025-02-10 09:27:23.828765 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-02-10 09:27:23.828779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-02-10 09:27:23.828790 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.828837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-02-10 09:27:23.828850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-02-10 09:27:23.828861 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.828871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-02-10 09:27:23.828887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-02-10 09:27:23.828898 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.828908 | orchestrator | 2025-02-10 09:27:23.828919 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-02-10 09:27:23.828929 | orchestrator | Monday 10 February 2025 09:22:42 +0000 (0:00:01.097) 0:03:17.579 ******* 2025-02-10 09:27:23.828939 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.828949 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.828960 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.828976 | orchestrator | 2025-02-10 09:27:23.828986 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-02-10 09:27:23.828996 | orchestrator | Monday 10 February 2025 09:22:43 +0000 (0:00:00.550) 0:03:18.130 ******* 2025-02-10 09:27:23.829006 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.829017 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.829027 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.829037 | orchestrator | 2025-02-10 09:27:23.829047 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-02-10 09:27:23.829057 | orchestrator | Monday 10 February 2025 09:22:44 +0000 (0:00:01.321) 0:03:19.452 ******* 2025-02-10 09:27:23.829068 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.829078 | orchestrator | 2025-02-10 09:27:23.829088 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-02-10 09:27:23.829098 | orchestrator | Monday 10 February 2025 09:22:45 +0000 (0:00:01.318) 0:03:20.770 ******* 2025-02-10 09:27:23.829108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api:2024.1', 'volumes': 
['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.829128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.829139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.829156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.829173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 
'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.829184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.829194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.829212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.829224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.829239 | 
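For readability, here is a minimal sketch of the HAProxy stanza that the glance_api entry logged above would plausibly render to. This is an editorial illustration, not part of the job output: it assumes the standard kolla-ansible haproxy-config template and an internal VIP of 192.168.16.9 (inferred from the no_proxy values shown earlier); the server lines are copied verbatim from the custom_member_list in the log.

# sketch only - assumed rendering, not taken from the job output
listen glance_api
    mode http
    bind 192.168.16.9:9292          # assumed internal VIP, see no_proxy entries above
    timeout client 6h               # from frontend_http_extra in the logged item
    timeout server 6h               # from backend_http_extra in the logged item
    server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5
    server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5
    server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5

Under the same assumptions, the heat_api and heat_api_cfn entries above (ports 8004 and 8000, tls_backend 'no', no extra timeout options) would produce analogous stanzas pointing at the same three backend nodes on their respective ports.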
orchestrator | 2025-02-10 09:27:23.829255 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-02-10 09:27:23.829265 | orchestrator | Monday 10 February 2025 09:22:53 +0000 (0:00:08.080) 0:03:28.850 ******* 2025-02-10 09:27:23.829276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.829287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.829297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.829308 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.829325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.829341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.829357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.829367 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.829376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.829385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 
'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.829402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.829411 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.829420 | orchestrator | 2025-02-10 09:27:23.829429 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-02-10 09:27:23.829438 | orchestrator | Monday 10 February 2025 09:22:54 +0000 (0:00:00.860) 0:03:29.711 ******* 2025-02-10 09:27:23.829451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-10 09:27:23.829460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-10 09:27:23.829469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-10 09:27:23.829482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-10 09:27:23.829491 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.829500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-10 09:27:23.829509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-10 09:27:23.829521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-10 09:27:23.829530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-10 09:27:23.829539 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.829547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-10 09:27:23.829556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-10 09:27:23.829565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-10 09:27:23.829574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-10 09:27:23.829583 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.829591 | orchestrator | 2025-02-10 09:27:23.829600 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-02-10 09:27:23.829609 | orchestrator | Monday 10 February 2025 09:22:55 +0000 (0:00:01.133) 0:03:30.844 ******* 2025-02-10 09:27:23.829617 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.829626 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.829635 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.829644 | orchestrator | 2025-02-10 09:27:23.829652 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-02-10 09:27:23.829661 | orchestrator | Monday 10 February 2025 09:22:56 +0000 (0:00:00.426) 0:03:31.271 ******* 2025-02-10 09:27:23.829670 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.829678 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.829687 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.829696 | orchestrator | 2025-02-10 09:27:23.829708 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-02-10 09:27:23.829718 | orchestrator | Monday 10 February 2025 09:22:57 +0000 (0:00:01.302) 0:03:32.574 ******* 2025-02-10 09:27:23.829727 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.829736 | orchestrator | 2025-02-10 09:27:23.829744 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-02-10 09:27:23.829753 | orchestrator | Monday 10 February 2025 09:22:58 +0000 (0:00:01.204) 0:03:33.778 ******* 2025-02-10 09:27:23.829773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back 
if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:27:23.829784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:27:23.829824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:27:23.829835 | orchestrator | 2025-02-10 09:27:23.829844 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-02-10 09:27:23.829852 | orchestrator | Monday 10 February 2025 09:23:04 +0000 (0:00:05.149) 0:03:38.927 ******* 2025-02-10 09:27:23.829861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
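
The horizon entries above are complete container definitions as kolla-ansible handles them: the registry image, the ENABLE_* feature flags passed as environment variables, the bind mounts, and a healthcheck. As a rough illustration of what such a definition amounts to, here is a minimal Python sketch that turns one into docker CLI arguments. It is only a sketch: the deployment drives containers through kolla-ansible's own Ansible modules rather than the docker CLI, the helper name docker_run_args is invented, and the empty strings in the 'volumes' lists appear to be optional mounts left unset, so they are skipped.

def docker_run_args(spec):
    # Illustrative only; kolla-ansible manages containers through its own
    # modules, not "docker run".
    args = ["docker", "run", "-d", "--name", spec["container_name"]]
    for key, value in spec.get("environment", {}).items():
        args += ["-e", f"{key}={value}"]
    for volume in spec.get("volumes", []):
        if volume:  # skip the '' placeholders seen in the logged definitions
            args += ["-v", volume]
    args.append(spec["image"])
    return args

# Abridged from the horizon definition logged above.
horizon = {
    "container_name": "horizon",
    "image": "nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1",
    "environment": {"ENABLE_HEAT": "yes", "ENABLE_MAGNUM": "yes", "ENABLE_ZUN": "no"},
    "volumes": ["/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro", "",
                "/etc/localtime:/etc/localtime:ro", "kolla_logs:/var/log/kolla/"],
}
print(" ".join(docker_run_args(horizon)))
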
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-10 09:27:23.829881 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.829896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-10 09:27:23.829912 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.829921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 
'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-10 09:27:23.829935 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.829944 | orchestrator | 2025-02-10 09:27:23.829952 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-02-10 09:27:23.829965 | orchestrator | Monday 10 February 2025 09:23:05 +0000 (0:00:00.987) 0:03:39.915 ******* 2025-02-10 09:27:23.829974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-10 09:27:23.829983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-10 09:27:23.829992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-10 09:27:23.830002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-10 09:27:23.830011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-02-10 09:27:23.830040 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.830053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-10 09:27:23.830067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-10 09:27:23.830077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-10 09:27:23.830086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-10 09:27:23.830095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-02-10 09:27:23.830104 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.830113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-10 09:27:23.830121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-10 09:27:23.830134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-10 09:27:23.830144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-10 09:27:23.830153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-02-10 09:27:23.830161 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.830170 | orchestrator | 2025-02-10 09:27:23.830179 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-02-10 09:27:23.830188 | orchestrator | Monday 10 February 2025 09:23:07 +0000 (0:00:02.124) 0:03:42.040 ******* 2025-02-10 09:27:23.830197 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.830205 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.830214 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.830222 | orchestrator | 2025-02-10 09:27:23.830231 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-02-10 09:27:23.830240 | orchestrator | Monday 10 February 2025 09:23:07 +0000 (0:00:00.608) 0:03:42.649 ******* 2025-02-10 09:27:23.830248 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.830257 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.830270 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.830279 | orchestrator | 2025-02-10 09:27:23.830291 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-02-10 09:27:23.830300 | orchestrator | Monday 10 February 2025 09:23:09 +0000 (0:00:01.777) 0:03:44.427 ******* 2025-02-10 09:27:23.830309 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.830317 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.830326 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.830334 | orchestrator | 2025-02-10 09:27:23.830343 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-02-10 09:27:23.830352 | orchestrator | Monday 10 February 2025 09:23:10 +0000 (0:00:00.676) 0:03:45.103 ******* 2025-02-10 09:27:23.830360 | orchestrator | included: ironic for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.830369 | orchestrator | 2025-02-10 09:27:23.830377 | orchestrator | TASK [haproxy-config : Copying over ironic haproxy config] ********************* 2025-02-10 09:27:23.830386 | orchestrator | Monday 10 February 2025 09:23:11 +0000 (0:00:01.317) 0:03:46.420 ******* 2025-02-10 09:27:23.830395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.830404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': 
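
Each service's 'haproxy' sub-dict (horizon above, ironic-api here) describes one load-balancer frontend/backend pair: HTTP mode, internal or external, a VIP-side port, the port the service listens on per host, and extra directives such as 'balance roundrobin' and the ACME-challenge routing rule. The Python sketch below shows how those keys could be rendered into an HAProxy listen section. It is an illustration, not the real template: it assumes 'port' is the VIP-side bind port and 'listen_port' the backend port, the render_listen helper and the VIP address are invented, TLS termination is ignored, and the authoritative rendering lives in kolla-ansible's haproxy-config role.

def render_listen(name, svc, vip, backends):
    # Assumption for this sketch: 'port' = bind port on the VIP,
    # 'listen_port' = port the service listens on each backend host.
    if not svc.get("enabled"):
        return ""
    lines = [f"listen {name}",
             f"    mode {svc.get('mode', 'http')}",
             f"    bind {vip}:{svc['port']}"]
    lines += [f"    {extra}" for extra in svc.get("frontend_http_extra", [])]
    lines += [f"    {extra}" for extra in svc.get("backend_http_extra", [])]
    lines += [f"    server {host} {addr}:{svc['listen_port']} check"
              for host, addr in backends.items()]
    return "\n".join(lines)

# Entry copied from the horizon_external definition above; the VIP is a
# placeholder, the node addresses come from the healthcheck URLs in the log.
horizon_external = {
    "enabled": True, "mode": "http", "external": True,
    "external_fqdn": "api.testbed.osism.xyz", "port": "443", "listen_port": "80",
    "frontend_http_extra": ["use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }"],
    "backend_http_extra": ["balance roundrobin"],
}
backends = {"testbed-node-0": "192.168.16.10",
            "testbed-node-1": "192.168.16.11",
            "testbed-node-2": "192.168.16.12"}
print(render_listen("horizon_external", horizon_external, "203.0.113.10", backends))
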
{'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.830419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.830428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.830442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.830451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': 
['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.830470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:27:23.830479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:27:23.830493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:27:23.830507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:27:23.830516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': 
['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:27:23.830525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:27:23.830541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:27:23.830550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:27:23.830563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:27:23.830572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:27:23.830586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:27:23.830595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:27:23.830611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:27:23.830620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:27:23.830629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:27:23.830638 | orchestrator | 2025-02-10 09:27:23.830647 | orchestrator | TASK [haproxy-config : Add configuration for ironic when 
using single external frontend] *** 2025-02-10 09:27:23.830656 | orchestrator | Monday 10 February 2025 09:23:21 +0000 (0:00:10.061) 0:03:56.481 ******* 2025-02-10 09:27:23.830670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.830684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.830693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:27:23.830709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:27:23.830718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 
'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:27:23.830736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:27:23.830753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:27:23.830762 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.830775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.830785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.830799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': 
True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:27:23.830820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:27:23.830916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:27:23.830928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:27:23.830937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.830946 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:27:23.830955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.830964 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.830973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:27:23.830992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:27:23.831002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 
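
The healthcheck blocks repeated in these definitions run commands such as 'healthcheck_curl ...', 'healthcheck_port ironic-conductor 5672', and 'healthcheck_listen apache2 8089' inside the containers via CMD-SHELL, and the surrounding durations are plain strings. Assuming those values are seconds, the sketch below shows how such a block maps onto the shape of Docker's HealthConfig, which expects durations in nanoseconds; the helper name to_docker_healthcheck is invented for this illustration.

def to_docker_healthcheck(hc):
    # Assumption: the string durations in the kolla healthcheck dicts are seconds.
    def seconds_to_ns(value):
        return int(value) * 1_000_000_000
    return {
        "Test": hc["test"],
        "Interval": seconds_to_ns(hc["interval"]),
        "Timeout": seconds_to_ns(hc["timeout"]),
        "Retries": int(hc["retries"]),
        "StartPeriod": seconds_to_ns(hc["start_period"]),
    }

# Copied from the ironic-api definition for testbed-node-0 above.
ironic_api_hc = {"interval": "30", "retries": "3", "start_period": "5",
                 "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:6385"],
                 "timeout": "30"}
print(to_docker_healthcheck(ironic_api_hc))
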
09:27:23.831011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:27:23.831020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:27:23.831029 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.831038 | orchestrator | 2025-02-10 09:27:23.831047 | orchestrator | TASK [haproxy-config : Configuring firewall for ironic] ************************ 2025-02-10 09:27:23.831056 | orchestrator | Monday 10 February 2025 09:23:23 +0000 (0:00:01.503) 0:03:57.985 ******* 2025-02-10 09:27:23.831065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-10 09:27:23.831073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-10 09:27:23.831082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_inspector', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}})  2025-02-10 09:27:23.831091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_inspector_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}})  2025-02-10 09:27:23.831100 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.831108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-10 09:27:23.831122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-10 09:27:23.831134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_inspector', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}})  2025-02-10 09:27:23.831143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-10 09:27:23.831156 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'ironic_inspector_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}})  2025-02-10 09:27:23.831165 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.831174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-10 09:27:23.831184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_inspector', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}})  2025-02-10 09:27:23.831193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_inspector_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}})  2025-02-10 09:27:23.831201 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.831210 | orchestrator | 2025-02-10 09:27:23.831219 | orchestrator | TASK [proxysql-config : Copying over ironic ProxySQL users config] ************* 2025-02-10 09:27:23.831228 | orchestrator | Monday 10 February 2025 09:23:24 +0000 (0:00:01.489) 0:03:59.474 ******* 2025-02-10 09:27:23.831237 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.831245 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.831254 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.831263 | orchestrator | 2025-02-10 09:27:23.831271 | orchestrator | TASK [proxysql-config : Copying over ironic ProxySQL rules config] ************* 2025-02-10 09:27:23.831280 | orchestrator | Monday 10 February 2025 09:23:25 +0000 (0:00:00.535) 0:04:00.010 ******* 2025-02-10 09:27:23.831289 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.831297 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.831306 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.831314 | orchestrator | 2025-02-10 09:27:23.831323 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-02-10 09:27:23.831331 | orchestrator | Monday 10 February 2025 09:23:26 +0000 (0:00:01.709) 0:04:01.719 ******* 2025-02-10 09:27:23.831340 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.831349 | orchestrator | 2025-02-10 09:27:23.831357 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-02-10 09:27:23.831366 | orchestrator | Monday 10 February 2025 09:23:28 +0000 (0:00:01.519) 0:04:03.239 ******* 2025-02-10 09:27:23.831375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:27:23.831391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:27:23.831401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:27:23.831424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:27:23.831435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:27:23.831444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:27:23.831458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:27:23.831467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:27:23.831481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:27:23.831490 | orchestrator | 2025-02-10 09:27:23.831499 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-02-10 09:27:23.831508 | orchestrator | Monday 10 February 2025 09:23:33 +0000 (0:00:05.151) 0:04:08.391 ******* 2025-02-10 09:27:23.831517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-10 09:27:23.831527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:27:23.831540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:27:23.831550 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.831559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-10 09:27:23.831579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:27:23.831591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:27:23.831600 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.831610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-10 09:27:23.831623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:27:23.831632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:27:23.831641 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.831650 | orchestrator | 2025-02-10 09:27:23.831659 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-02-10 09:27:23.831667 | orchestrator | Monday 10 February 2025 09:23:34 +0000 (0:00:00.941) 0:04:09.332 ******* 2025-02-10 
09:27:23.831680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-02-10 09:27:23.831692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-02-10 09:27:23.831701 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.831723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-02-10 09:27:23.831733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-02-10 09:27:23.831742 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.831751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-02-10 09:27:23.831760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-02-10 09:27:23.831769 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.831778 | orchestrator | 2025-02-10 09:27:23.831787 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-02-10 09:27:23.831795 | orchestrator | Monday 10 February 2025 09:23:36 +0000 (0:00:01.602) 0:04:10.935 ******* 2025-02-10 09:27:23.831844 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.831854 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.831863 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.831872 | orchestrator | 2025-02-10 09:27:23.831881 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-02-10 09:27:23.831890 | orchestrator | Monday 10 February 2025 09:23:36 +0000 (0:00:00.380) 0:04:11.315 ******* 2025-02-10 09:27:23.831898 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.831907 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.831916 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.831924 | orchestrator | 2025-02-10 09:27:23.831932 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-02-10 09:27:23.831940 | orchestrator | Monday 10 February 2025 09:23:38 +0000 (0:00:01.674) 0:04:12.990 ******* 2025-02-10 09:27:23.831948 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.831956 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.831964 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.831972 | orchestrator | 
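
Editor's note: the keystone entries above illustrate the kolla-ansible haproxy-config pattern that repeats for the services that follow. Each service item optionally carries a 'haproxy' dict; only items that define one (here the keystone API container, with its keystone_internal and keystone_external listeners on port 5000) produce load-balancer configuration and report "changed", while keystone-ssh and keystone-fernet are skipped. Below is a minimal, hypothetical Python sketch of that selection logic, using the shapes and values taken from the log above; the helper function is illustrative only and is not the role's actual code.

    # Hypothetical sketch: select the HAProxy-relevant listeners from a
    # kolla-style service map shaped like the keystone items logged above.
    services = {
        "keystone": {
            "enabled": True,
            "haproxy": {
                "keystone_internal": {"enabled": True, "mode": "http", "external": False,
                                      "port": "5000", "listen_port": "5000"},
                "keystone_external": {"enabled": True, "mode": "http", "external": True,
                                      "external_fqdn": "api.testbed.osism.xyz",
                                      "port": "5000", "listen_port": "5000"},
            },
        },
        "keystone-ssh": {"enabled": True},      # no 'haproxy' key -> task skips this item
        "keystone-fernet": {"enabled": True},   # no 'haproxy' key -> task skips this item
    }

    def haproxy_listeners(services):
        """Yield (listener_name, listener) for enabled services exposing a 'haproxy' dict."""
        for name, svc in services.items():
            if not svc.get("enabled"):
                continue
            for listener_name, listener in svc.get("haproxy", {}).items():
                if listener.get("enabled"):
                    yield listener_name, listener

    for name, listener in haproxy_listeners(services):
        scope = "external" if listener.get("external") else "internal"
        print(f"{name}: {scope} listener on port {listener['listen_port']}")

Running the sketch lists the two keystone listeners and nothing for keystone-ssh or keystone-fernet, which matches the changed/skipping pattern seen in the task output above (and in the magnum and manila tasks that follow, where only the API containers carry a 'haproxy' dict).
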
2025-02-10 09:27:23.831980 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-02-10 09:27:23.831988 | orchestrator | Monday 10 February 2025 09:23:38 +0000 (0:00:00.609) 0:04:13.600 ******* 2025-02-10 09:27:23.831996 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.832004 | orchestrator | 2025-02-10 09:27:23.832012 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-02-10 09:27:23.832024 | orchestrator | Monday 10 February 2025 09:23:40 +0000 (0:00:01.760) 0:04:15.360 ******* 2025-02-10 09:27:23.832032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:27:23.832041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:27:23.832075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:27:23.832092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832101 | orchestrator | 2025-02-10 09:27:23.832109 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-02-10 09:27:23.832117 | orchestrator | Monday 10 February 2025 09:23:45 +0000 (0:00:05.496) 0:04:20.857 ******* 2025-02-10 09:27:23.832136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:27:23.832146 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832159 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.832168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:27:23.832177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832185 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.832193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:27:23.832213 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832227 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.832235 | orchestrator | 2025-02-10 09:27:23.832244 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-02-10 09:27:23.832252 | orchestrator | Monday 10 February 2025 09:23:46 +0000 (0:00:00.898) 0:04:21.755 ******* 2025-02-10 09:27:23.832260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-02-10 09:27:23.832269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-02-10 09:27:23.832277 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.832285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-02-10 09:27:23.832294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-02-10 09:27:23.832302 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.832310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-02-10 09:27:23.832318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-02-10 09:27:23.832326 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.832334 | orchestrator | 2025-02-10 09:27:23.832342 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-02-10 09:27:23.832350 | orchestrator | Monday 10 February 2025 09:23:48 +0000 (0:00:01.510) 0:04:23.266 ******* 2025-02-10 09:27:23.832359 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.832367 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.832375 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.832383 | orchestrator | 2025-02-10 09:27:23.832391 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-02-10 09:27:23.832399 | orchestrator | Monday 10 February 2025 09:23:48 +0000 (0:00:00.550) 0:04:23.816 ******* 2025-02-10 09:27:23.832407 | orchestrator | skipping: 
[testbed-node-0] 2025-02-10 09:27:23.832415 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.832423 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.832431 | orchestrator | 2025-02-10 09:27:23.832439 | orchestrator | TASK [include_role : manila] *************************************************** 2025-02-10 09:27:23.832447 | orchestrator | Monday 10 February 2025 09:23:50 +0000 (0:00:01.470) 0:04:25.286 ******* 2025-02-10 09:27:23.832455 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.832463 | orchestrator | 2025-02-10 09:27:23.832471 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-02-10 09:27:23.832479 | orchestrator | Monday 10 February 2025 09:23:51 +0000 (0:00:01.570) 0:04:26.856 ******* 2025-02-10 09:27:23.832487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-02-10 09:27:23.832511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-02-10 09:27:23.832546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-02-10 09:27:23.832597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832623 | orchestrator | 2025-02-10 09:27:23.832631 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-02-10 09:27:23.832640 | orchestrator | Monday 10 February 2025 09:23:56 +0000 (0:00:04.679) 0:04:31.536 ******* 2025-02-10 09:27:23.832648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-02-10 09:27:23.832672 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832699 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.832708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-02-10 09:27:23.832716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832729 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832757 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.832766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-02-10 09:27:23.832775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832792 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.832816 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.832825 | orchestrator | 2025-02-10 09:27:23.832833 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-02-10 09:27:23.832841 | orchestrator | Monday 10 February 2025 09:23:57 +0000 (0:00:01.210) 0:04:32.747 ******* 2025-02-10 09:27:23.832850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-02-10 09:27:23.832858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-02-10 09:27:23.832866 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.832874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-02-10 09:27:23.832883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-02-10 09:27:23.832891 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.832899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-02-10 09:27:23.832919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-02-10 09:27:23.832928 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.832936 | orchestrator | 2025-02-10 09:27:23.832944 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-02-10 09:27:23.832952 | orchestrator | Monday 10 February 2025 09:23:59 +0000 (0:00:01.286) 0:04:34.033 ******* 2025-02-10 09:27:23.832961 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.832968 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.832977 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.832985 | orchestrator | 2025-02-10 09:27:23.832993 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-02-10 09:27:23.833002 | orchestrator | Monday 10 February 2025 09:23:59 +0000 (0:00:00.554) 0:04:34.588 ******* 2025-02-10 09:27:23.833010 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.833018 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.833026 | 
orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.833034 | orchestrator | 2025-02-10 09:27:23.833042 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-02-10 09:27:23.833050 | orchestrator | Monday 10 February 2025 09:24:01 +0000 (0:00:01.576) 0:04:36.164 ******* 2025-02-10 09:27:23.833058 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.833066 | orchestrator | 2025-02-10 09:27:23.833074 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-02-10 09:27:23.833082 | orchestrator | Monday 10 February 2025 09:24:02 +0000 (0:00:01.552) 0:04:37.717 ******* 2025-02-10 09:27:23.833090 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:27:23.833098 | orchestrator | 2025-02-10 09:27:23.833106 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-02-10 09:27:23.833115 | orchestrator | Monday 10 February 2025 09:24:06 +0000 (0:00:03.548) 0:04:41.266 ******* 2025-02-10 09:27:23.833130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:27:23.833139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 
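
Editor's note: unlike the API services above, the mariadb item in this task does not rely on generated backend members; it supplies a custom_member_list in which one Galera node is the active member and the other two are marked "backup", each health-checked against the clustercheck port 4569. A hypothetical Python sketch of how such a member list could be assembled is shown below; the node names, addresses, and check parameters are taken from the log, but the function itself is illustrative and is not the kolla-ansible template.

    # Hypothetical sketch of building a member list like the one logged above:
    # the first Galera node is the active member, the rest are 'backup', and
    # every member is health-checked against the clustercheck port (4569).
    nodes = [
        ("testbed-node-0", "192.168.16.10"),
        ("testbed-node-1", "192.168.16.11"),
        ("testbed-node-2", "192.168.16.12"),
    ]

    def mariadb_member_list(nodes, port=3306, check_port=4569):
        members = []
        for index, (name, address) in enumerate(nodes):
            line = (f" server {name} {address}:{port} "
                    f"check port {check_port} inter 2000 rise 2 fall 5")
            if index > 0:
                line += " backup"   # only the first node takes traffic; others are standbys
            members.append(line)
        return members

    print("\n".join(mariadb_member_list(nodes)))

The output reproduces the three "server testbed-node-N ... check port 4569 inter 2000 rise 2 fall 5 [backup]" lines that appear in the custom_member_list entries of this task.
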
 2025-02-10 09:27:23.833209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:27:23.833230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-10 09:27:23.833240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 
2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:27:23.833271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-10 09:27:23.833281 | orchestrator | 2025-02-10 09:27:23.833289 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-02-10 09:27:23.833297 | orchestrator | Monday 10 February 2025 09:24:11 +0000 (0:00:04.711) 0:04:45.977 ******* 2025-02-10 09:27:23.833306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-02-10 09:27:23.833321 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-10 09:27:23.833329 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.833359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-02-10 09:27:23.833369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-10 09:27:23.833383 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.833392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-02-10 09:27:23.833407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-10 09:27:23.833415 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.833423 | orchestrator | 2025-02-10 09:27:23.833431 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-02-10 09:27:23.833440 | orchestrator | Monday 10 February 2025 09:24:14 +0000 (0:00:03.405) 0:04:49.382 ******* 2025-02-10 09:27:23.833459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-10 09:27:23.833470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-10 09:27:23.833484 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.833492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-10 09:27:23.833501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-10 09:27:23.833510 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.833518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-10 09:27:23.833527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-10 09:27:23.833535 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.833544 | orchestrator | 2025-02-10 09:27:23.833552 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-02-10 09:27:23.833560 | orchestrator | Monday 10 February 2025 09:24:18 +0000 (0:00:04.191) 0:04:53.574 ******* 2025-02-10 09:27:23.833568 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.833576 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.833584 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.833592 | orchestrator | 2025-02-10 09:27:23.833600 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 
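The mariadb entries above only show the raw role variables as logged. As a rough illustration (not part of the job output), the custom_member_list printed here corresponds to an active/backup HAProxy listen section in which testbed-node-0 is the only active backend and the health checks target port 4569, presumably the mariadb_clustercheck service configured in the same task. A minimal Python sketch of that rendering follows, assuming the role writes the member lines verbatim under a listen block; the bind address is left as a placeholder because the internal VIP is not printed in this part of the log.

# Illustrative sketch only -- not part of the job output above.
# Member lines and tcp options are copied from the logged 'custom_member_list',
# 'frontend_tcp_extra' and 'backend_tcp_extra' values; the bind address is a
# hypothetical placeholder.

members = [
    " server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5",
    " server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup",
    " server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup",
]

def render_listen(name, bind_addr, port, options, members):
    """Assemble a single HAProxy 'listen' section from the logged values."""
    lines = [f"listen {name}", f"    bind {bind_addr}:{port}"]
    lines += [f"    {opt}" for opt in options]
    # member entries from the log already start with one leading space
    lines += ["   " + m for m in members]
    return "\n".join(lines)

print(render_listen(
    "mariadb",
    "<internal_vip>",   # placeholder: the VIP is not shown in this log section
    "3306",
    ["mode tcp", "option clitcpka", "option srvtcpka", "option httpchk",
     "timeout client 3600s", "timeout server 3600s"],
    members,
))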
2025-02-10 09:27:23.833608 | orchestrator | Monday 10 February 2025 09:24:19 +0000 (0:00:00.360) 0:04:53.934 ******* 2025-02-10 09:27:23.833616 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.833624 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.833632 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.833640 | orchestrator | 2025-02-10 09:27:23.833658 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-02-10 09:27:23.833667 | orchestrator | Monday 10 February 2025 09:24:20 +0000 (0:00:01.621) 0:04:55.555 ******* 2025-02-10 09:27:23.833680 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.833689 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.833697 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.833705 | orchestrator | 2025-02-10 09:27:23.833713 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-02-10 09:27:23.833721 | orchestrator | Monday 10 February 2025 09:24:21 +0000 (0:00:00.551) 0:04:56.107 ******* 2025-02-10 09:27:23.833729 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.833737 | orchestrator | 2025-02-10 09:27:23.833745 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-02-10 09:27:23.833753 | orchestrator | Monday 10 February 2025 09:24:22 +0000 (0:00:01.710) 0:04:57.818 ******* 2025-02-10 09:27:23.833762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-02-10 09:27:23.833771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-02-10 09:27:23.833785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-02-10 09:27:23.833794 | orchestrator | 2025-02-10 09:27:23.833816 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-02-10 09:27:23.833825 | orchestrator | Monday 10 February 2025 09:24:24 +0000 (0:00:01.739) 0:04:59.558 ******* 2025-02-10 09:27:23.833834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-02-10 09:27:23.833865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-02-10 09:27:23.833875 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.833883 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.833892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-02-10 09:27:23.833901 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.833909 | orchestrator | 2025-02-10 09:27:23.833917 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-02-10 09:27:23.833925 | orchestrator | Monday 10 February 2025 09:24:25 +0000 (0:00:00.500) 0:05:00.059 ******* 2025-02-10 09:27:23.833934 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-02-10 09:27:23.833942 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.833951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-02-10 09:27:23.833960 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.833968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-02-10 09:27:23.833976 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.833985 | orchestrator | 2025-02-10 09:27:23.833993 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-02-10 09:27:23.834001 | orchestrator | Monday 10 February 2025 09:24:26 +0000 (0:00:01.148) 0:05:01.207 ******* 2025-02-10 09:27:23.834009 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.834037 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.834047 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.834055 | orchestrator | 2025-02-10 09:27:23.834063 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-02-10 09:27:23.834071 | orchestrator | Monday 10 February 2025 09:24:26 +0000 (0:00:00.549) 0:05:01.756 ******* 2025-02-10 09:27:23.834079 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.834092 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.834100 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.834108 | orchestrator | 2025-02-10 09:27:23.834116 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-02-10 09:27:23.834124 | orchestrator | Monday 10 February 2025 09:24:28 +0000 (0:00:01.364) 0:05:03.120 ******* 2025-02-10 09:27:23.834132 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.834140 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.834148 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.834156 | orchestrator | 2025-02-10 09:27:23.834165 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-02-10 09:27:23.834173 | orchestrator | Monday 10 February 2025 09:24:28 +0000 (0:00:00.556) 0:05:03.677 ******* 2025-02-10 09:27:23.834181 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.834189 | orchestrator | 2025-02-10 09:27:23.834197 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-02-10 09:27:23.834205 | orchestrator | Monday 10 February 2025 09:24:30 +0000 (0:00:01.751) 0:05:05.429 ******* 2025-02-10 09:27:23.834225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:27:23.834234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:27:23.834293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.834312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:27:23.834321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.834338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:27:23.834403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': 
True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:27:23.834417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.834458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.834467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.834481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.834491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:27:23.834532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:27:23.834541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:27:23.834563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.834572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.834606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 
'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:27:23.834624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:27:23.834642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:27:23.834652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:27:23.834711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.834738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.834759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:27:23.834777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.834799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.834820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:27:23.834857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:27:23.834866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834884 | orchestrator | 2025-02-10 09:27:23.834893 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-02-10 09:27:23.834901 | orchestrator | Monday 10 February 2025 09:24:36 +0000 (0:00:05.590) 0:05:11.019 ******* 2025-02-10 09:27:23.834910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:27:23.834918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:27:23.834969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.834977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.834991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.835000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:27:23.835033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.835051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.835060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:27:23.835095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:27:23.835109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:27:23.835133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835170 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.835183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:27:23.835192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:27:23.835200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.835243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835256 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.835270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:27:23.835297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2025-02-10 09:27:23.835316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.835353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.835362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.835371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.835379 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:27:23.835428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:27:23.835438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:27:23.835465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.835493 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.835502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:27:23.835510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:27:23.835533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:27:23.835542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.835554 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.835563 | orchestrator | 2025-02-10 09:27:23.835581 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-02-10 09:27:23.835591 | orchestrator | Monday 10 February 2025 09:24:38 +0000 (0:00:02.006) 0:05:13.026 ******* 2025-02-10 09:27:23.835599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-02-10 09:27:23.835608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-02-10 09:27:23.835616 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.835624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}})  2025-02-10 09:27:23.835633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-02-10 09:27:23.835641 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.835649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-02-10 09:27:23.835657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-02-10 09:27:23.835666 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.835674 | orchestrator | 2025-02-10 09:27:23.835682 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-02-10 09:27:23.835690 | orchestrator | Monday 10 February 2025 09:24:40 +0000 (0:00:02.179) 0:05:15.205 ******* 2025-02-10 09:27:23.835698 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.835706 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.835714 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.835722 | orchestrator | 2025-02-10 09:27:23.835733 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-02-10 09:27:23.835742 | orchestrator | Monday 10 February 2025 09:24:40 +0000 (0:00:00.599) 0:05:15.805 ******* 2025-02-10 09:27:23.835749 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.835758 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.835766 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.835774 | orchestrator | 2025-02-10 09:27:23.835782 | orchestrator | TASK [include_role : placement] ************************************************ 2025-02-10 09:27:23.835789 | orchestrator | Monday 10 February 2025 09:24:42 +0000 (0:00:01.872) 0:05:17.678 ******* 2025-02-10 09:27:23.835797 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.835839 | orchestrator | 2025-02-10 09:27:23.835848 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-02-10 09:27:23.835856 | orchestrator | Monday 10 February 2025 09:24:44 +0000 (0:00:01.717) 0:05:19.395 ******* 2025-02-10 09:27:23.835864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.835889 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.835898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.835907 | orchestrator | 2025-02-10 09:27:23.835915 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-02-10 09:27:23.835923 | orchestrator | Monday 10 February 2025 09:24:49 +0000 (0:00:04.502) 0:05:23.898 ******* 2025-02-10 09:27:23.835938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.835947 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.835955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.835969 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.835977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.835986 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.835993 | orchestrator | 2025-02-10 09:27:23.836010 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-02-10 09:27:23.836019 | orchestrator | Monday 10 February 2025 09:24:49 +0000 (0:00:00.912) 0:05:24.810 ******* 2025-02-10 09:27:23.836027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-10 09:27:23.836038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-10 09:27:23.836045 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.836052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-10 09:27:23.836059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-10 09:27:23.836066 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.836073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-10 09:27:23.836081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-10 09:27:23.836088 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.836095 | orchestrator | 2025-02-10 09:27:23.836102 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-02-10 09:27:23.836109 | orchestrator | Monday 10 February 2025 09:24:51 +0000 (0:00:01.490) 0:05:26.300 ******* 2025-02-10 09:27:23.836116 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.836127 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.836134 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.836141 | orchestrator | 2025-02-10 09:27:23.836148 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-02-10 09:27:23.836155 | orchestrator | Monday 10 February 2025 09:24:51 +0000 (0:00:00.414) 0:05:26.714 ******* 2025-02-10 09:27:23.836162 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.836172 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.836179 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.836187 | orchestrator | 2025-02-10 09:27:23.836193 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-02-10 09:27:23.836200 | orchestrator | Monday 10 February 2025 09:24:53 +0000 (0:00:01.968) 0:05:28.683 ******* 2025-02-10 09:27:23.836207 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.836214 | orchestrator | 2025-02-10 09:27:23.836221 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-02-10 09:27:23.836228 | orchestrator | Monday 10 February 2025 09:24:55 +0000 (0:00:01.899) 0:05:30.582 ******* 2025-02-10 09:27:23.836235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.836259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.836268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.836279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.836287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.836295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.836320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.836329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.836336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.836348 | orchestrator | 2025-02-10 09:27:23.836355 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-02-10 09:27:23.836363 | orchestrator | Monday 10 February 2025 09:25:02 +0000 (0:00:06.879) 0:05:37.462 ******* 2025-02-10 09:27:23.836370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.836394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.836403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.836411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.836423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 
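[Editor's note] The nova-api entries above show how each kolla service carries a 'haproxy' sub-dict describing its internal and external frontends (nova_api on 8774, nova_metadata on 8775), with enabled flags recorded both as booleans and as 'yes'/'no' strings. The short standalone Python sketch below is not the kolla-ansible role code (the real haproxy-config role renders Jinja templates); it only re-expresses the selection the task performs here, using values copied from the log, and the truthy() helper is a hypothetical name introduced solely for this illustration.

# Illustrative sketch only: which haproxy frontends would be rendered for the
# nova-api service shown in the log above. truthy() is a hypothetical helper
# added to normalise the mixed bool / 'yes'/'no' flags seen in the output.
nova_api_haproxy = {
    "nova_api": {"enabled": True, "mode": "http", "external": False,
                 "port": "8774", "listen_port": "8774", "tls_backend": "no"},
    "nova_api_external": {"enabled": True, "mode": "http", "external": True,
                          "external_fqdn": "api.testbed.osism.xyz",
                          "port": "8774", "listen_port": "8774", "tls_backend": "no"},
    "nova_metadata": {"enabled": True, "mode": "http", "external": False,
                      "port": "8775", "listen_port": "8775", "tls_backend": "no"},
    "nova_metadata_external": {"enabled": "no", "mode": "http", "external": True,
                               "external_fqdn": "api.testbed.osism.xyz",
                               "port": "8775", "listen_port": "8775", "tls_backend": "no"},
}

def truthy(value):
    """Treat both Python booleans and 'yes'/'no' strings as enable flags."""
    if isinstance(value, bool):
        return value
    return str(value).lower() in ("yes", "true", "1")

# Only enabled frontends get a listener; external ones bind the external FQDN.
for name, frontend in nova_api_haproxy.items():
    if not truthy(frontend["enabled"]):
        print(f"{name}: skipped (disabled)")
        continue
    scope = "external" if frontend["external"] else "internal"
    print(f"{name}: {scope} listener on port {frontend['listen_port']}")

Running this prints listeners for nova_api, nova_api_external and nova_metadata and skips nova_metadata_external, matching the enabled/external values visible in the task output above.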
 2025-02-10 09:27:23.836431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.836438 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.836445 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.836458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.836477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.836485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.836499 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.836506 | orchestrator | 2025-02-10 09:27:23.836513 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] 
************************** 2025-02-10 09:27:23.836521 | orchestrator | Monday 10 February 2025 09:25:03 +0000 (0:00:01.005) 0:05:38.467 ******* 2025-02-10 09:27:23.836528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-10 09:27:23.836535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-10 09:27:23.836542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-10 09:27:23.836549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-10 09:27:23.836557 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.836564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-10 09:27:23.836571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-10 09:27:23.836578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-10 09:27:23.836585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-10 09:27:23.836592 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.836599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-10 09:27:23.836607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-10 09:27:23.836614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-10 09:27:23.836621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-10 09:27:23.836628 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.836635 | orchestrator | 2025-02-10 09:27:23.836642 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] 
*************** 2025-02-10 09:27:23.836660 | orchestrator | Monday 10 February 2025 09:25:05 +0000 (0:00:01.483) 0:05:39.951 ******* 2025-02-10 09:27:23.836672 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.836679 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.836687 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.836693 | orchestrator | 2025-02-10 09:27:23.836701 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-02-10 09:27:23.836708 | orchestrator | Monday 10 February 2025 09:25:05 +0000 (0:00:00.590) 0:05:40.542 ******* 2025-02-10 09:27:23.836715 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.836722 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.836728 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.836735 | orchestrator | 2025-02-10 09:27:23.836742 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-02-10 09:27:23.836750 | orchestrator | Monday 10 February 2025 09:25:07 +0000 (0:00:01.694) 0:05:42.236 ******* 2025-02-10 09:27:23.836756 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.836763 | orchestrator | 2025-02-10 09:27:23.836770 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-02-10 09:27:23.836777 | orchestrator | Monday 10 February 2025 09:25:08 +0000 (0:00:01.552) 0:05:43.789 ******* 2025-02-10 09:27:23.836784 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-02-10 09:27:23.836792 | orchestrator | 2025-02-10 09:27:23.836799 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-02-10 09:27:23.836822 | orchestrator | Monday 10 February 2025 09:25:10 +0000 (0:00:01.704) 0:05:45.494 ******* 2025-02-10 09:27:23.836830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-02-10 09:27:23.836838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-02-10 09:27:23.836845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-02-10 09:27:23.836853 | orchestrator | 2025-02-10 09:27:23.836860 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-02-10 09:27:23.836867 | orchestrator | Monday 10 February 2025 09:25:16 +0000 (0:00:05.688) 0:05:51.182 ******* 2025-02-10 09:27:23.836874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-10 09:27:23.836886 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.836898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-10 09:27:23.836917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-10 09:27:23.836925 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.836933 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.836940 | orchestrator | 2025-02-10 09:27:23.836947 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-02-10 09:27:23.836954 | orchestrator | Monday 10 February 2025 09:25:17 +0000 (0:00:01.503) 0:05:52.686 ******* 2025-02-10 09:27:23.836961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-02-10 09:27:23.836968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-02-10 09:27:23.836976 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.836983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-02-10 09:27:23.836993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-02-10 09:27:23.837000 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.837007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-02-10 09:27:23.837014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-02-10 09:27:23.837022 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.837029 | orchestrator | 2025-02-10 09:27:23.837036 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-02-10 09:27:23.837043 | orchestrator | Monday 10 February 2025 09:25:20 +0000 (0:00:02.564) 0:05:55.251 ******* 2025-02-10 09:27:23.837050 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.837057 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.837064 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.837071 | orchestrator | 2025-02-10 09:27:23.837078 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-02-10 09:27:23.837085 | orchestrator | Monday 10 February 2025 09:25:21 +0000 (0:00:00.652) 0:05:55.903 ******* 2025-02-10 09:27:23.837096 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.837103 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.837110 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.837117 | orchestrator | 2025-02-10 09:27:23.837125 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-02-10 09:27:23.837132 | orchestrator | Monday 10 February 2025 09:25:22 +0000 (0:00:01.200) 0:05:57.104 ******* 2025-02-10 09:27:23.837139 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-02-10 09:27:23.837146 | orchestrator | 2025-02-10 09:27:23.837153 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-02-10 09:27:23.837160 | orchestrator | Monday 10 February 2025 09:25:23 +0000 (0:00:01.511) 0:05:58.615 ******* 2025-02-10 09:27:23.837168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-10 09:27:23.837175 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.837192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-10 09:27:23.837200 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.837209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-10 09:27:23.837217 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.837224 | orchestrator | 2025-02-10 09:27:23.837231 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-02-10 09:27:23.837238 | orchestrator | Monday 10 February 2025 09:25:26 +0000 (0:00:02.298) 0:06:00.914 ******* 2025-02-10 09:27:23.837245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-10 09:27:23.837252 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.837260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-10 09:27:23.837270 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.837278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-10 09:27:23.837285 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.837293 | orchestrator | 2025-02-10 09:27:23.837300 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-02-10 09:27:23.837307 | orchestrator | Monday 10 February 2025 09:25:28 +0000 (0:00:02.239) 0:06:03.154 ******* 2025-02-10 09:27:23.837314 | orchestrator 
| skipping: [testbed-node-0] 2025-02-10 09:27:23.837321 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.837328 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.837335 | orchestrator | 2025-02-10 09:27:23.837342 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-02-10 09:27:23.837349 | orchestrator | Monday 10 February 2025 09:25:30 +0000 (0:00:02.021) 0:06:05.175 ******* 2025-02-10 09:27:23.837356 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.837363 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.837370 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.837377 | orchestrator | 2025-02-10 09:27:23.837384 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-02-10 09:27:23.837391 | orchestrator | Monday 10 February 2025 09:25:30 +0000 (0:00:00.618) 0:06:05.794 ******* 2025-02-10 09:27:23.837398 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.837405 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.837412 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.837419 | orchestrator | 2025-02-10 09:27:23.837426 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-02-10 09:27:23.837433 | orchestrator | Monday 10 February 2025 09:25:32 +0000 (0:00:01.204) 0:06:06.998 ******* 2025-02-10 09:27:23.837440 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item=nova-serialproxy) 2025-02-10 09:27:23.837447 | orchestrator | 2025-02-10 09:27:23.837454 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-02-10 09:27:23.837461 | orchestrator | Monday 10 February 2025 09:25:33 +0000 (0:00:01.446) 0:06:08.445 ******* 2025-02-10 09:27:23.837484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-10 09:27:23.837493 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.837500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-10 09:27:23.837512 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.837520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-10 09:27:23.837527 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.837534 | orchestrator | 2025-02-10 09:27:23.837541 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-02-10 09:27:23.837548 | orchestrator | Monday 10 February 2025 09:25:35 +0000 (0:00:01.897) 0:06:10.343 ******* 2025-02-10 09:27:23.837556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-10 09:27:23.837563 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.837571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-10 09:27:23.837578 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.837585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-10 09:27:23.837592 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.837599 | orchestrator | 2025-02-10 09:27:23.837607 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-02-10 09:27:23.837614 | orchestrator | Monday 10 February 2025 09:25:37 +0000 (0:00:01.859) 0:06:12.202 ******* 2025-02-10 09:27:23.837621 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.837628 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.837635 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.837642 | orchestrator | 2025-02-10 09:27:23.837649 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-02-10 09:27:23.837656 | orchestrator | Monday 10 February 2025 09:25:39 +0000 (0:00:02.212) 0:06:14.415 ******* 2025-02-10 09:27:23.837677 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.837685 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.837696 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.837704 | 
orchestrator | 2025-02-10 09:27:23.837711 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-02-10 09:27:23.837718 | orchestrator | Monday 10 February 2025 09:25:39 +0000 (0:00:00.335) 0:06:14.750 ******* 2025-02-10 09:27:23.837725 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.837739 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.837747 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.837755 | orchestrator | 2025-02-10 09:27:23.837762 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-02-10 09:27:23.837769 | orchestrator | Monday 10 February 2025 09:25:41 +0000 (0:00:01.701) 0:06:16.452 ******* 2025-02-10 09:27:23.837779 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.837786 | orchestrator | 2025-02-10 09:27:23.837794 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-02-10 09:27:23.837826 | orchestrator | Monday 10 February 2025 09:25:43 +0000 (0:00:02.016) 0:06:18.469 ******* 2025-02-10 09:27:23.837835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.837843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:27:23.837857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:27:23.837864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:27:23.837872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.837897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.837905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:27:23.837919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:27:23.837932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:27:23.837944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.837967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.837992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:27:23.838004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:27:23.838058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:27:23.838073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.838083 | orchestrator | 2025-02-10 09:27:23.838092 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-02-10 09:27:23.838102 | orchestrator | Monday 10 February 2025 09:25:48 +0000 (0:00:04.695) 0:06:23.164 ******* 2025-02-10 09:27:23.838110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.838134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:27:23.838160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:27:23.838172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 
'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:27:23.838183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.838193 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.838200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.838213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:27:23.838220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:27:23.838244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:27:23.838252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.838259 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.838267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.838273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:27:23.838284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:27:23.838295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:27:23.838310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:27:23.838317 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.838324 | orchestrator | 2025-02-10 09:27:23.838330 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-02-10 09:27:23.838337 | orchestrator | Monday 10 February 2025 09:25:49 +0000 (0:00:01.388) 0:06:24.552 ******* 2025-02-10 09:27:23.838343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-10 09:27:23.838350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-10 09:27:23.838356 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.838363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-10 09:27:23.838369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-10 09:27:23.838375 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.838384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-10 09:27:23.838394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-10 09:27:23.838400 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.838406 | orchestrator | 2025-02-10 09:27:23.838413 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-02-10 09:27:23.838419 | orchestrator | Monday 10 February 2025 09:25:51 +0000 (0:00:01.471) 0:06:26.024 ******* 2025-02-10 09:27:23.838425 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.838431 | orchestrator | skipping: 
[testbed-node-1] 2025-02-10 09:27:23.838437 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.838444 | orchestrator | 2025-02-10 09:27:23.838450 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-02-10 09:27:23.838456 | orchestrator | Monday 10 February 2025 09:25:51 +0000 (0:00:00.660) 0:06:26.684 ******* 2025-02-10 09:27:23.838462 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.838469 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.838481 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.838487 | orchestrator | 2025-02-10 09:27:23.838493 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-02-10 09:27:23.838500 | orchestrator | Monday 10 February 2025 09:25:53 +0000 (0:00:01.769) 0:06:28.453 ******* 2025-02-10 09:27:23.838506 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.838512 | orchestrator | 2025-02-10 09:27:23.838518 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-02-10 09:27:23.838525 | orchestrator | Monday 10 February 2025 09:25:55 +0000 (0:00:02.251) 0:06:30.704 ******* 2025-02-10 09:27:23.838531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:27:23.838549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:27:23.838561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:27:23.838568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:27:23.838579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:27:23.838596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:27:23.838608 | orchestrator | 2025-02-10 09:27:23.838615 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-02-10 09:27:23.838621 | orchestrator | Monday 10 February 2025 09:26:02 +0000 (0:00:07.124) 0:06:37.829 ******* 2025-02-10 09:27:23.838628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-10 09:27:23.838634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-10 09:27:23.838645 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.838652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-10 09:27:23.838668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': 
True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-10 09:27:23.838680 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.838686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-10 09:27:23.838693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-10 09:27:23.838706 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.838713 | orchestrator | 2025-02-10 09:27:23.838719 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-02-10 09:27:23.838725 | orchestrator | Monday 10 February 2025 09:26:04 +0000 (0:00:01.063) 0:06:38.893 ******* 2025-02-10 09:27:23.838732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-02-10 09:27:23.838738 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-02-10 09:27:23.838744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-02-10 09:27:23.838751 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.838757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-02-10 09:27:23.838765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-02-10 09:27:23.838771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-02-10 09:27:23.838777 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.838784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-02-10 09:27:23.838811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-02-10 09:27:23.838818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-02-10 09:27:23.838825 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.838831 | orchestrator | 2025-02-10 09:27:23.838837 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-02-10 09:27:23.838846 | orchestrator | Monday 10 February 2025 09:26:05 +0000 (0:00:01.614) 0:06:40.508 ******* 2025-02-10 09:27:23.838853 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.838860 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.838866 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.838873 | orchestrator | 2025-02-10 09:27:23.838879 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-02-10 09:27:23.838885 | orchestrator | Monday 10 February 2025 09:26:06 +0000 (0:00:00.638) 0:06:41.146 ******* 2025-02-10 09:27:23.838895 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.838902 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.838908 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.838914 | orchestrator | 2025-02-10 09:27:23.838920 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-02-10 09:27:23.838927 | orchestrator 
| Monday 10 February 2025 09:26:08 +0000 (0:00:01.851) 0:06:42.997 ******* 2025-02-10 09:27:23.838933 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.838939 | orchestrator | 2025-02-10 09:27:23.838945 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-02-10 09:27:23.838952 | orchestrator | Monday 10 February 2025 09:26:10 +0000 (0:00:02.060) 0:06:45.057 ******* 2025-02-10 09:27:23.838959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-10 09:27:23.838966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:27:23.838972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.838979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.838995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:27:23.839008 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-10 09:27:23.839018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:27:23.839025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:27:23.839045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-10 09:27:23.839061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:27:23.839072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:27:23.839098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-10 09:27:23.839105 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:27:23.839120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:27:23.839150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-10 09:27:23.839163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:27:23.839176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:27:23.839206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': 
['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-10 09:27:23.839219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:27:23.839225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:27:23.839258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839264 | orchestrator | 2025-02-10 09:27:23.839271 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-02-10 09:27:23.839277 | orchestrator | Monday 10 February 2025 09:26:15 +0000 (0:00:05.738) 0:06:50.796 ******* 2025-02-10 09:27:23.839284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:27:23.839290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:27:23.839297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839316 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:27:23.839328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:27:23.839334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:27:23.839341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:27:23.839371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:27:23.839378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839385 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.839392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:27:23.839398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:27:23.839424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:27:23.839435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:27:23.839442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:27:23.839470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839480 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.839491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:27:23.839501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:27:23.839508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:27:23.839528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:27:23.839542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:27:23.839552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:27:23.839572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:27:23.839578 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.839584 | orchestrator | 2025-02-10 09:27:23.839591 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-02-10 09:27:23.839597 | orchestrator | Monday 10 February 2025 09:26:17 +0000 (0:00:01.465) 0:06:52.262 ******* 2025-02-10 09:27:23.839604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-02-10 09:27:23.839610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-02-10 09:27:23.839617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-02-10 09:27:23.839628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-02-10 09:27:23.839634 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.839640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-02-10 09:27:23.839646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}})  2025-02-10 09:27:23.839653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-02-10 09:27:23.839660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-02-10 09:27:23.839666 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.839675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-02-10 09:27:23.839682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-02-10 09:27:23.839688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-02-10 09:27:23.839695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-02-10 09:27:23.839701 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.839707 | orchestrator | 2025-02-10 09:27:23.839714 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-02-10 09:27:23.839720 | orchestrator | Monday 10 February 2025 09:26:19 +0000 (0:00:01.876) 0:06:54.138 ******* 2025-02-10 09:27:23.839726 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.839732 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.839739 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.839745 | orchestrator | 2025-02-10 09:27:23.839751 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-02-10 09:27:23.839757 | orchestrator | Monday 10 February 2025 09:26:19 +0000 (0:00:00.594) 0:06:54.733 ******* 2025-02-10 09:27:23.839763 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.839769 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.839776 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.839782 | orchestrator | 2025-02-10 09:27:23.839788 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-02-10 09:27:23.839794 | orchestrator | Monday 10 February 2025 09:26:21 +0000 (0:00:01.931) 0:06:56.664 ******* 2025-02-10 09:27:23.839839 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.839846 | orchestrator | 2025-02-10 09:27:23.839853 | orchestrator | TASK [haproxy-config : Copying 
over rabbitmq haproxy config] ******************* 2025-02-10 09:27:23.839859 | orchestrator | Monday 10 February 2025 09:26:23 +0000 (0:00:01.841) 0:06:58.506 ******* 2025-02-10 09:27:23.839866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:27:23.839873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:27:23.839889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-10 09:27:23.839896 | orchestrator | 2025-02-10 09:27:23.839902 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-02-10 09:27:23.839908 | orchestrator | Monday 10 February 2025 09:26:26 +0000 (0:00:03.277) 0:07:01.783 ******* 2025-02-10 09:27:23.839915 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-02-10 09:27:23.839926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-02-10 09:27:23.839938 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.839945 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.839951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-02-10 09:27:23.839957 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.839964 | orchestrator | 2025-02-10 09:27:23.840035 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-02-10 09:27:23.840043 | orchestrator | Monday 10 February 2025 09:26:27 +0000 (0:00:00.754) 0:07:02.538 ******* 2025-02-10 09:27:23.840049 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-02-10 09:27:23.840056 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.840062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-02-10 09:27:23.840069 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.840075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-02-10 09:27:23.840081 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.840088 | orchestrator | 2025-02-10 09:27:23.840094 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-02-10 09:27:23.840100 | orchestrator | Monday 10 February 2025 09:26:28 +0000 (0:00:00.939) 0:07:03.477 ******* 2025-02-10 09:27:23.840110 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.840117 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.840123 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.840129 | orchestrator | 2025-02-10 09:27:23.840136 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-02-10 09:27:23.840142 | orchestrator | Monday 10 February 2025 09:26:29 +0000 (0:00:00.636) 0:07:04.114 ******* 2025-02-10 09:27:23.840148 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.840158 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.840164 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.840170 | orchestrator | 2025-02-10 09:27:23.840176 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-02-10 09:27:23.840183 | orchestrator | Monday 10 February 2025 09:26:31 +0000 (0:00:01.911) 0:07:06.026 ******* 2025-02-10 09:27:23.840189 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:27:23.840195 | orchestrator | 2025-02-10 09:27:23.840201 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-02-10 09:27:23.840207 | orchestrator | Monday 10 February 2025 09:26:33 +0000 (0:00:02.158) 0:07:08.185 ******* 2025-02-10 09:27:23.840214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.840221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.840231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.840243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.840250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 
'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.840256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-02-10 09:27:23.840262 | orchestrator | 2025-02-10 09:27:23.840269 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-02-10 09:27:23.840275 | orchestrator | Monday 10 February 2025 09:26:41 +0000 (0:00:08.667) 0:07:16.853 ******* 2025-02-10 09:27:23.840284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.840295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.840301 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.840308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.840314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.840321 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.840327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.840340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-02-10 09:27:23.840347 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.840353 | orchestrator | 2025-02-10 09:27:23.840359 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-02-10 09:27:23.840365 | orchestrator | Monday 10 February 2025 09:26:43 +0000 (0:00:01.047) 0:07:17.900 ******* 2025-02-10 09:27:23.840371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-02-10 09:27:23.840377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-02-10 09:27:23.840383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-02-10 09:27:23.840389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-02-10 09:27:23.840395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-02-10 09:27:23.840401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-02-10 09:27:23.840407 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.840413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-02-10 09:27:23.840419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-02-10 09:27:23.840425 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.840431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-02-10 09:27:23.840437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-02-10 09:27:23.840443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-02-10 09:27:23.840453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-02-10 09:27:23.840459 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.840465 | orchestrator | 2025-02-10 09:27:23.840471 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-02-10 09:27:23.840477 | orchestrator | Monday 10 February 2025 09:26:45 +0000 (0:00:02.040) 0:07:19.940 ******* 2025-02-10 09:27:23.840485 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.840492 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.840498 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.840503 | orchestrator | 2025-02-10 09:27:23.840509 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-02-10 09:27:23.840515 | orchestrator | Monday 10 February 2025 09:26:45 +0000 (0:00:00.727) 0:07:20.668 ******* 2025-02-10 09:27:23.840521 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.840527 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.840533 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.840539 | orchestrator | 2025-02-10 09:27:23.840545 | orchestrator | TASK [include_role : swift] **************************************************** 2025-02-10 09:27:23.840550 | orchestrator | Monday 10 February 2025 09:26:47 +0000 (0:00:01.872) 0:07:22.541 ******* 2025-02-10 09:27:23.840556 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.840562 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.840568 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.840574 | orchestrator | 2025-02-10 09:27:23.840583 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-02-10 09:27:23.840589 | orchestrator | Monday 10 February 2025 09:26:48 +0000 (0:00:00.358) 0:07:22.899 ******* 2025-02-10 09:27:23.840595 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.840601 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.840607 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.840613 | orchestrator | 2025-02-10 09:27:23.840619 | orchestrator | TASK [include_role : trove] **************************************************** 2025-02-10 09:27:23.840625 | orchestrator | Monday 10 February 2025 09:26:48 +0000 (0:00:00.639) 0:07:23.539 ******* 2025-02-10 09:27:23.840631 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.840637 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.840643 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.840649 | orchestrator | 2025-02-10 09:27:23.840655 | orchestrator | TASK [include_role : venus] **************************************************** 2025-02-10 09:27:23.840661 | orchestrator | Monday 10 February 2025 09:26:49 +0000 (0:00:00.648) 0:07:24.188 ******* 2025-02-10 09:27:23.840667 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.840673 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.840679 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.840685 | orchestrator | 2025-02-10 09:27:23.840691 | orchestrator | TASK [include_role : watcher] 
************************************************** 2025-02-10 09:27:23.840697 | orchestrator | Monday 10 February 2025 09:26:49 +0000 (0:00:00.341) 0:07:24.530 ******* 2025-02-10 09:27:23.840703 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.840708 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.840714 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.840720 | orchestrator | 2025-02-10 09:27:23.840726 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-02-10 09:27:23.840732 | orchestrator | Monday 10 February 2025 09:26:50 +0000 (0:00:00.744) 0:07:25.274 ******* 2025-02-10 09:27:23.840738 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.840744 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.840750 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.840755 | orchestrator | 2025-02-10 09:27:23.840765 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-02-10 09:27:23.840771 | orchestrator | Monday 10 February 2025 09:26:51 +0000 (0:00:01.146) 0:07:26.421 ******* 2025-02-10 09:27:23.840777 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:27:23.840783 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:27:23.840788 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:27:23.840794 | orchestrator | 2025-02-10 09:27:23.840811 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-02-10 09:27:23.840817 | orchestrator | Monday 10 February 2025 09:26:52 +0000 (0:00:00.752) 0:07:27.173 ******* 2025-02-10 09:27:23.840823 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:27:23.840829 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:27:23.840835 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:27:23.840841 | orchestrator | 2025-02-10 09:27:23.840847 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-02-10 09:27:23.840853 | orchestrator | Monday 10 February 2025 09:26:52 +0000 (0:00:00.696) 0:07:27.870 ******* 2025-02-10 09:27:23.840859 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:27:23.840865 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:27:23.840870 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:27:23.840876 | orchestrator | 2025-02-10 09:27:23.840882 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-02-10 09:27:23.840888 | orchestrator | Monday 10 February 2025 09:26:54 +0000 (0:00:01.477) 0:07:29.347 ******* 2025-02-10 09:27:23.840894 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:27:23.840900 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:27:23.840906 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:27:23.840912 | orchestrator | 2025-02-10 09:27:23.840917 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-02-10 09:27:23.840923 | orchestrator | Monday 10 February 2025 09:26:55 +0000 (0:00:01.395) 0:07:30.743 ******* 2025-02-10 09:27:23.840929 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:27:23.840935 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:27:23.840941 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:27:23.840947 | orchestrator | 2025-02-10 09:27:23.840953 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-02-10 09:27:23.840959 | orchestrator | Monday 10 February 2025 09:26:56 +0000 (0:00:01.066) 
0:07:31.809 ******* 2025-02-10 09:27:23.840965 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:27:23.840971 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:27:23.840977 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:27:23.840982 | orchestrator | 2025-02-10 09:27:23.840989 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-02-10 09:27:23.840995 | orchestrator | Monday 10 February 2025 09:27:02 +0000 (0:00:05.156) 0:07:36.965 ******* 2025-02-10 09:27:23.841000 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:27:23.841009 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:27:23.841015 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:27:23.841021 | orchestrator | 2025-02-10 09:27:23.841027 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-02-10 09:27:23.841033 | orchestrator | Monday 10 February 2025 09:27:05 +0000 (0:00:03.250) 0:07:40.216 ******* 2025-02-10 09:27:23.841039 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.841045 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.841054 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.841060 | orchestrator | 2025-02-10 09:27:23.841066 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-02-10 09:27:23.841072 | orchestrator | Monday 10 February 2025 09:27:06 +0000 (0:00:01.290) 0:07:41.506 ******* 2025-02-10 09:27:23.841078 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:27:23.841084 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:27:23.841089 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:27:23.841095 | orchestrator | 2025-02-10 09:27:23.841101 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-02-10 09:27:23.841107 | orchestrator | Monday 10 February 2025 09:27:16 +0000 (0:00:09.516) 0:07:51.023 ******* 2025-02-10 09:27:23.841118 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.841124 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.841130 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.841136 | orchestrator | 2025-02-10 09:27:23.841142 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-02-10 09:27:23.841148 | orchestrator | Monday 10 February 2025 09:27:16 +0000 (0:00:00.717) 0:07:51.740 ******* 2025-02-10 09:27:23.841154 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.841160 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.841166 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.841171 | orchestrator | 2025-02-10 09:27:23.841180 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-02-10 09:27:23.841186 | orchestrator | Monday 10 February 2025 09:27:17 +0000 (0:00:00.671) 0:07:52.412 ******* 2025-02-10 09:27:23.841192 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.841198 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.841204 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.841210 | orchestrator | 2025-02-10 09:27:23.841216 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-02-10 09:27:23.841221 | orchestrator | Monday 10 February 2025 09:27:17 +0000 (0:00:00.363) 0:07:52.776 ******* 2025-02-10 09:27:23.841227 | orchestrator | skipping: 
[testbed-node-0] 2025-02-10 09:27:23.841233 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.841239 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.841245 | orchestrator | 2025-02-10 09:27:23.841251 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-02-10 09:27:23.841256 | orchestrator | Monday 10 February 2025 09:27:18 +0000 (0:00:00.671) 0:07:53.447 ******* 2025-02-10 09:27:23.841262 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.841268 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.841274 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.841280 | orchestrator | 2025-02-10 09:27:23.841286 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-02-10 09:27:23.841292 | orchestrator | Monday 10 February 2025 09:27:19 +0000 (0:00:00.694) 0:07:54.141 ******* 2025-02-10 09:27:23.841297 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.841303 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.841309 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.841315 | orchestrator | 2025-02-10 09:27:23.841321 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-02-10 09:27:23.841326 | orchestrator | Monday 10 February 2025 09:27:19 +0000 (0:00:00.718) 0:07:54.860 ******* 2025-02-10 09:27:23.841333 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:27:23.841338 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:27:23.841344 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:27:23.841350 | orchestrator | 2025-02-10 09:27:23.841356 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-02-10 09:27:23.841362 | orchestrator | Monday 10 February 2025 09:27:21 +0000 (0:00:01.169) 0:07:56.029 ******* 2025-02-10 09:27:23.841367 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:27:23.841374 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:27:23.841379 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:27:23.841386 | orchestrator | 2025-02-10 09:27:23.841391 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:27:23.841397 | orchestrator | testbed-node-0 : ok=85  changed=42  unreachable=0 failed=0 skipped=138  rescued=0 ignored=0 2025-02-10 09:27:23.841404 | orchestrator | testbed-node-1 : ok=84  changed=42  unreachable=0 failed=0 skipped=138  rescued=0 ignored=0 2025-02-10 09:27:23.841410 | orchestrator | testbed-node-2 : ok=84  changed=42  unreachable=0 failed=0 skipped=138  rescued=0 ignored=0 2025-02-10 09:27:23.841419 | orchestrator | 2025-02-10 09:27:23.841425 | orchestrator | 2025-02-10 09:27:23.841430 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:27:23.841436 | orchestrator | Monday 10 February 2025 09:27:22 +0000 (0:00:00.908) 0:07:56.937 ******* 2025-02-10 09:27:23.841442 | orchestrator | =============================================================================== 2025-02-10 09:27:23.841448 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend -- 10.19s 2025-02-10 09:27:23.841454 | orchestrator | haproxy-config : Copying over glance haproxy config -------------------- 10.12s 2025-02-10 09:27:23.841460 | orchestrator | haproxy-config : Copying over ironic haproxy config -------------------- 10.06s 
2025-02-10 09:27:23.841466 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 9.58s 2025-02-10 09:27:23.841472 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.52s 2025-02-10 09:27:23.841478 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 8.84s 2025-02-10 09:27:23.841483 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.67s 2025-02-10 09:27:23.841489 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 8.08s 2025-02-10 09:27:23.841495 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 7.92s 2025-02-10 09:27:23.841504 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.12s 2025-02-10 09:27:26.861538 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.88s 2025-02-10 09:27:26.861717 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 6.20s 2025-02-10 09:27:26.861736 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 6.13s 2025-02-10 09:27:26.861750 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.80s 2025-02-10 09:27:26.861763 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.74s 2025-02-10 09:27:26.861778 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.69s 2025-02-10 09:27:26.861845 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.59s 2025-02-10 09:27:26.861873 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 5.50s 2025-02-10 09:27:26.861892 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.16s 2025-02-10 09:27:26.861904 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 5.15s 2025-02-10 09:27:26.861918 | orchestrator | 2025-02-10 09:27:23 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:26.861932 | orchestrator | 2025-02-10 09:27:23 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:26.861965 | orchestrator | 2025-02-10 09:27:26 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:27:26.866075 | orchestrator | 2025-02-10 09:27:26 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:26.866828 | orchestrator | 2025-02-10 09:27:26 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:27:26.867111 | orchestrator | 2025-02-10 09:27:26 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:29.923127 | orchestrator | 2025-02-10 09:27:29 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:27:29.924341 | orchestrator | 2025-02-10 09:27:29 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:29.924386 | orchestrator | 2025-02-10 09:27:29 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:27:32.970404 | orchestrator | 2025-02-10 09:27:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:32.970554 | orchestrator | 2025-02-10 09:27:32 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 
09:27:32.970756 | orchestrator | 2025-02-10 09:27:32 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:32.971881 | orchestrator | 2025-02-10 09:27:32 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:27:36.019221 | orchestrator | 2025-02-10 09:27:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:36.019371 | orchestrator | 2025-02-10 09:27:36 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:27:39.061973 | orchestrator | 2025-02-10 09:27:36 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:39.062183 | orchestrator | 2025-02-10 09:27:36 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:27:39.062209 | orchestrator | 2025-02-10 09:27:36 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:39.062250 | orchestrator | 2025-02-10 09:27:39 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:27:42.120958 | orchestrator | 2025-02-10 09:27:39 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:42.121092 | orchestrator | 2025-02-10 09:27:39 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:27:42.121111 | orchestrator | 2025-02-10 09:27:39 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:42.121143 | orchestrator | 2025-02-10 09:27:42 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:27:42.121502 | orchestrator | 2025-02-10 09:27:42 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:42.121586 | orchestrator | 2025-02-10 09:27:42 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:27:45.170512 | orchestrator | 2025-02-10 09:27:42 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:45.170695 | orchestrator | 2025-02-10 09:27:45 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:27:45.172349 | orchestrator | 2025-02-10 09:27:45 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:45.172430 | orchestrator | 2025-02-10 09:27:45 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:27:48.210215 | orchestrator | 2025-02-10 09:27:45 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:48.210375 | orchestrator | 2025-02-10 09:27:48 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:27:51.251038 | orchestrator | 2025-02-10 09:27:48 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:51.251215 | orchestrator | 2025-02-10 09:27:48 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:27:51.251238 | orchestrator | 2025-02-10 09:27:48 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:51.251275 | orchestrator | 2025-02-10 09:27:51 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:27:54.293208 | orchestrator | 2025-02-10 09:27:51 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:54.293389 | orchestrator | 2025-02-10 09:27:51 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:27:54.293413 | orchestrator | 2025-02-10 09:27:51 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:54.293449 | orchestrator | 2025-02-10 
09:27:54 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:27:54.293599 | orchestrator | 2025-02-10 09:27:54 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:54.293626 | orchestrator | 2025-02-10 09:27:54 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:27:57.347951 | orchestrator | 2025-02-10 09:27:54 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:27:57.348124 | orchestrator | 2025-02-10 09:27:57 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:27:57.350498 | orchestrator | 2025-02-10 09:27:57 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:27:57.352057 | orchestrator | 2025-02-10 09:27:57 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:00.420371 | orchestrator | 2025-02-10 09:27:57 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:00.420540 | orchestrator | 2025-02-10 09:28:00 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:00.422623 | orchestrator | 2025-02-10 09:28:00 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:00.422675 | orchestrator | 2025-02-10 09:28:00 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:03.473890 | orchestrator | 2025-02-10 09:28:00 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:03.474136 | orchestrator | 2025-02-10 09:28:03 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:03.477574 | orchestrator | 2025-02-10 09:28:03 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:03.477629 | orchestrator | 2025-02-10 09:28:03 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:06.526742 | orchestrator | 2025-02-10 09:28:03 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:06.526949 | orchestrator | 2025-02-10 09:28:06 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:06.527703 | orchestrator | 2025-02-10 09:28:06 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:06.527745 | orchestrator | 2025-02-10 09:28:06 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:09.577751 | orchestrator | 2025-02-10 09:28:06 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:09.577994 | orchestrator | 2025-02-10 09:28:09 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:09.580229 | orchestrator | 2025-02-10 09:28:09 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:09.580309 | orchestrator | 2025-02-10 09:28:09 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:12.619929 | orchestrator | 2025-02-10 09:28:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:12.620115 | orchestrator | 2025-02-10 09:28:12 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:12.620341 | orchestrator | 2025-02-10 09:28:12 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:12.620371 | orchestrator | 2025-02-10 09:28:12 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:15.664985 | orchestrator | 2025-02-10 09:28:12 | INFO  | Wait 1 
second(s) until the next check 2025-02-10 09:28:15.665170 | orchestrator | 2025-02-10 09:28:15 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:15.665384 | orchestrator | 2025-02-10 09:28:15 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:15.666810 | orchestrator | 2025-02-10 09:28:15 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:18.710641 | orchestrator | 2025-02-10 09:28:15 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:18.710974 | orchestrator | 2025-02-10 09:28:18 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:18.711091 | orchestrator | 2025-02-10 09:28:18 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:18.712924 | orchestrator | 2025-02-10 09:28:18 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:21.762304 | orchestrator | 2025-02-10 09:28:18 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:21.762445 | orchestrator | 2025-02-10 09:28:21 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:21.762698 | orchestrator | 2025-02-10 09:28:21 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:21.763634 | orchestrator | 2025-02-10 09:28:21 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:24.825717 | orchestrator | 2025-02-10 09:28:21 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:24.825925 | orchestrator | 2025-02-10 09:28:24 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:27.880728 | orchestrator | 2025-02-10 09:28:24 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:27.880908 | orchestrator | 2025-02-10 09:28:24 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:27.880930 | orchestrator | 2025-02-10 09:28:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:27.880964 | orchestrator | 2025-02-10 09:28:27 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:27.881331 | orchestrator | 2025-02-10 09:28:27 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:27.881367 | orchestrator | 2025-02-10 09:28:27 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:27.883476 | orchestrator | 2025-02-10 09:28:27 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:30.922638 | orchestrator | 2025-02-10 09:28:30 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:30.928756 | orchestrator | 2025-02-10 09:28:30 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:30.928821 | orchestrator | 2025-02-10 09:28:30 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:33.965397 | orchestrator | 2025-02-10 09:28:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:33.965544 | orchestrator | 2025-02-10 09:28:33 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:37.021870 | orchestrator | 2025-02-10 09:28:33 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:37.021971 | orchestrator | 2025-02-10 09:28:33 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state 
STARTED 2025-02-10 09:28:37.021997 | orchestrator | 2025-02-10 09:28:33 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:37.022052 | orchestrator | 2025-02-10 09:28:37 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:37.022854 | orchestrator | 2025-02-10 09:28:37 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:37.023487 | orchestrator | 2025-02-10 09:28:37 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:40.071181 | orchestrator | 2025-02-10 09:28:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:40.071327 | orchestrator | 2025-02-10 09:28:40 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:40.071383 | orchestrator | 2025-02-10 09:28:40 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:40.072700 | orchestrator | 2025-02-10 09:28:40 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:40.072879 | orchestrator | 2025-02-10 09:28:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:43.125579 | orchestrator | 2025-02-10 09:28:43 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:43.125813 | orchestrator | 2025-02-10 09:28:43 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:43.125887 | orchestrator | 2025-02-10 09:28:43 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:43.129020 | orchestrator | 2025-02-10 09:28:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:46.176429 | orchestrator | 2025-02-10 09:28:46 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:46.179325 | orchestrator | 2025-02-10 09:28:46 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:46.180094 | orchestrator | 2025-02-10 09:28:46 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:49.221009 | orchestrator | 2025-02-10 09:28:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:49.221161 | orchestrator | 2025-02-10 09:28:49 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:49.221243 | orchestrator | 2025-02-10 09:28:49 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:49.222371 | orchestrator | 2025-02-10 09:28:49 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:52.258924 | orchestrator | 2025-02-10 09:28:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:52.259092 | orchestrator | 2025-02-10 09:28:52 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:55.300064 | orchestrator | 2025-02-10 09:28:52 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:55.300192 | orchestrator | 2025-02-10 09:28:52 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:55.300210 | orchestrator | 2025-02-10 09:28:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:55.300241 | orchestrator | 2025-02-10 09:28:55 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:55.300658 | orchestrator | 2025-02-10 09:28:55 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:55.300690 | orchestrator 
| 2025-02-10 09:28:55 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:28:58.348874 | orchestrator | 2025-02-10 09:28:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:28:58.349076 | orchestrator | 2025-02-10 09:28:58 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:28:58.351237 | orchestrator | 2025-02-10 09:28:58 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:28:58.354088 | orchestrator | 2025-02-10 09:28:58 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:29:01.418471 | orchestrator | 2025-02-10 09:28:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:01.418638 | orchestrator | 2025-02-10 09:29:01 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:01.422264 | orchestrator | 2025-02-10 09:29:01 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:29:01.422341 | orchestrator | 2025-02-10 09:29:01 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:29:04.489102 | orchestrator | 2025-02-10 09:29:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:04.489304 | orchestrator | 2025-02-10 09:29:04 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:04.490097 | orchestrator | 2025-02-10 09:29:04 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:29:04.493979 | orchestrator | 2025-02-10 09:29:04 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:29:07.540695 | orchestrator | 2025-02-10 09:29:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:07.540901 | orchestrator | 2025-02-10 09:29:07 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:07.541573 | orchestrator | 2025-02-10 09:29:07 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:29:07.541616 | orchestrator | 2025-02-10 09:29:07 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:29:10.594084 | orchestrator | 2025-02-10 09:29:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:10.594289 | orchestrator | 2025-02-10 09:29:10 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:10.594590 | orchestrator | 2025-02-10 09:29:10 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:29:10.594621 | orchestrator | 2025-02-10 09:29:10 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:29:13.633997 | orchestrator | 2025-02-10 09:29:10 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:13.634211 | orchestrator | 2025-02-10 09:29:13 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:13.635984 | orchestrator | 2025-02-10 09:29:13 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:29:13.639687 | orchestrator | 2025-02-10 09:29:13 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:29:16.690367 | orchestrator | 2025-02-10 09:29:13 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:16.690561 | orchestrator | 2025-02-10 09:29:16 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:16.692082 | orchestrator | 2025-02-10 09:29:16 | INFO  | Task 
7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:29:16.693785 | orchestrator | 2025-02-10 09:29:16 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:29:19.741053 | orchestrator | 2025-02-10 09:29:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:19.741266 | orchestrator | 2025-02-10 09:29:19 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:19.743922 | orchestrator | 2025-02-10 09:29:19 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:29:19.744017 | orchestrator | 2025-02-10 09:29:19 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:29:22.791346 | orchestrator | 2025-02-10 09:29:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:22.791507 | orchestrator | 2025-02-10 09:29:22 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:22.793145 | orchestrator | 2025-02-10 09:29:22 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:29:22.796086 | orchestrator | 2025-02-10 09:29:22 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:29:25.854082 | orchestrator | 2025-02-10 09:29:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:25.854241 | orchestrator | 2025-02-10 09:29:25 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:25.855004 | orchestrator | 2025-02-10 09:29:25 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:29:25.857131 | orchestrator | 2025-02-10 09:29:25 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:29:28.909806 | orchestrator | 2025-02-10 09:29:25 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:28.910089 | orchestrator | 2025-02-10 09:29:28 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:28.911444 | orchestrator | 2025-02-10 09:29:28 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:29:28.911547 | orchestrator | 2025-02-10 09:29:28 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state STARTED 2025-02-10 09:29:31.955764 | orchestrator | 2025-02-10 09:29:28 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:31.955912 | orchestrator | 2025-02-10 09:29:31 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:31.957555 | orchestrator | 2025-02-10 09:29:31 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:29:31.958880 | orchestrator | 2025-02-10 09:29:31 | INFO  | Task 681fc7a5-ea3d-4ab4-b398-4c15d5357d95 is in state SUCCESS 2025-02-10 09:29:31.961295 | orchestrator | 2025-02-10 09:29:31.961338 | orchestrator | 2025-02-10 09:29:31.961351 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:29:31.961363 | orchestrator | 2025-02-10 09:29:31.961409 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:29:31.961422 | orchestrator | Monday 10 February 2025 09:27:26 +0000 (0:00:00.354) 0:00:00.354 ******* 2025-02-10 09:29:31.961434 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:29:31.961448 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:29:31.961459 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:29:31.961471 | orchestrator | 2025-02-10 
09:29:31.961483 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:29:31.961495 | orchestrator | Monday 10 February 2025 09:27:26 +0000 (0:00:00.452) 0:00:00.807 ******* 2025-02-10 09:29:31.961507 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-02-10 09:29:31.961520 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-02-10 09:29:31.961533 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-02-10 09:29:31.961544 | orchestrator | 2025-02-10 09:29:31.961556 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-02-10 09:29:31.961568 | orchestrator | 2025-02-10 09:29:31.961580 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-02-10 09:29:31.961593 | orchestrator | Monday 10 February 2025 09:27:27 +0000 (0:00:00.477) 0:00:01.284 ******* 2025-02-10 09:29:31.961606 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:31.961618 | orchestrator | 2025-02-10 09:29:31.961630 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-02-10 09:29:31.961641 | orchestrator | Monday 10 February 2025 09:27:28 +0000 (0:00:00.648) 0:00:01.932 ******* 2025-02-10 09:29:31.961681 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-10 09:29:31.961692 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-10 09:29:31.961704 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-10 09:29:31.961715 | orchestrator | 2025-02-10 09:29:31.961726 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-02-10 09:29:31.961737 | orchestrator | Monday 10 February 2025 09:27:29 +0000 (0:00:01.109) 0:00:03.041 ******* 2025-02-10 09:29:31.961751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:29:31.961770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
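The "Setting sysctl values" result above boils down to a single kernel parameter, vm.max_map_count=262144, which OpenSearch requires for its memory-mapped index files. A minimal stand-alone sketch of such a task follows; the module choice (ansible.posix.sysctl) and the persistence options are assumptions and are not taken from the actual role, only the name/value pair comes from the log.

# Minimal sketch, assuming the ansible.posix collection is available.
- name: Set vm.max_map_count for OpenSearch
  ansible.posix.sysctl:
    name: vm.max_map_count
    value: "262144"
    sysctl_set: true   # apply to the running kernel immediately
    state: present     # keep the setting across reboots
  become: true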
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:29:31.961793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:29:31.961807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:29:31.961828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:29:31.961838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': 
True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:29:31.961845 | orchestrator | 2025-02-10 09:29:31.961897 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-02-10 09:29:31.961907 | orchestrator | Monday 10 February 2025 09:27:31 +0000 (0:00:01.915) 0:00:04.957 ******* 2025-02-10 09:29:31.961915 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:31.961924 | orchestrator | 2025-02-10 09:29:31.961932 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-02-10 09:29:31.961940 | orchestrator | Monday 10 February 2025 09:27:32 +0000 (0:00:00.949) 0:00:05.906 ******* 2025-02-10 09:29:31.961957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:29:31.961966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:29:31.961982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
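The flattened Python dictionaries above are hard to read; rendered as YAML, the "opensearch" service definition for testbed-node-0 looks as follows (all values are copied from the log output, nothing is added):

opensearch:
  container_name: opensearch
  group: opensearch
  enabled: true
  image: nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1
  environment:
    OPENSEARCH_JAVA_OPTS: "-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true"
  volumes:
    - /etc/kolla/opensearch/:/var/lib/kolla/config_files/
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - opensearch:/var/lib/opensearch/data
    - kolla_logs:/var/log/kolla/
  dimensions: {}
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"]
    timeout: "30"
  haproxy:
    opensearch:
      enabled: true
      mode: http
      external: false
      port: "9200"
      frontend_http_extra:
        - option dontlog-normal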
'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:29:31.961991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:29:31.962007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:29:31.962048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:29:31.962063 | orchestrator | 2025-02-10 09:29:31.962071 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-02-10 09:29:31.962080 | orchestrator | Monday 10 February 2025 09:27:35 +0000 (0:00:03.753) 0:00:09.659 ******* 2025-02-10 09:29:31.962089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-10 09:29:31.962098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-10 09:29:31.962108 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:31.962123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-10 09:29:31.962132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-10 09:29:31.962146 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:31.962154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-10 09:29:31.962164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-10 09:29:31.962173 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:31.962181 | orchestrator | 2025-02-10 09:29:31.962189 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-02-10 09:29:31.962198 | orchestrator | Monday 10 February 2025 09:27:36 
+0000 (0:00:01.131) 0:00:10.791 ******* 2025-02-10 09:29:31.962212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-10 09:29:31.962225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-10 09:29:31.962235 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:31.962243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-10 09:29:31.962252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-10 09:29:31.962261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-10 09:29:31.962275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-10 09:29:31.962288 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:31.962295 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:31.962303 | orchestrator | 2025-02-10 09:29:31.962310 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-02-10 09:29:31.962317 | orchestrator | Monday 10 February 2025 09:27:38 +0000 (0:00:01.787) 0:00:12.578 ******* 2025-02-10 09:29:31.962325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:29:31.962333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:29:31.962341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:29:31.962362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:29:31.962375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:29:31.962383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:29:31.962391 | orchestrator | 2025-02-10 09:29:31.962399 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-02-10 09:29:31.962406 | orchestrator | Monday 10 February 2025 09:27:42 +0000 (0:00:03.670) 0:00:16.248 ******* 2025-02-10 09:29:31.962414 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:29:31.962421 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:29:31.962429 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:29:31.962436 | orchestrator | 2025-02-10 09:29:31.962443 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-02-10 09:29:31.962451 | orchestrator | Monday 10 February 2025 09:27:45 +0000 (0:00:03.276) 0:00:19.525 ******* 2025-02-10 09:29:31.962459 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:29:31.962466 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:29:31.962473 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:29:31.962485 | orchestrator | 2025-02-10 09:29:31.962492 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-02-10 09:29:31.962499 | orchestrator | Monday 10 February 2025 09:27:48 +0000 (0:00:02.940) 0:00:22.465 ******* 2025-02-10 09:29:31.962512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:29:31.962520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:29:31.962528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-10 09:29:31.962536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:29:31.962544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:29:31.962562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-10 09:29:31.962570 | orchestrator | 2025-02-10 09:29:31.962578 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-02-10 09:29:31.962585 | orchestrator | Monday 10 February 2025 09:27:52 +0000 (0:00:03.538) 0:00:26.003 ******* 2025-02-10 09:29:31.962593 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:31.962600 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:29:31.962607 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:29:31.962615 | orchestrator | 2025-02-10 09:29:31.962622 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-02-10 09:29:31.962630 | orchestrator | Monday 10 February 2025 09:27:52 +0000 (0:00:00.638) 0:00:26.641 ******* 2025-02-10 09:29:31.962637 | orchestrator | 2025-02-10 09:29:31.962645 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-02-10 09:29:31.962652 | orchestrator | Monday 10 February 2025 09:27:52 +0000 (0:00:00.060) 0:00:26.702 ******* 2025-02-10 09:29:31.962659 | orchestrator | 2025-02-10 09:29:31.962667 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-02-10 09:29:31.962674 | orchestrator | Monday 10 February 2025 09:27:53 +0000 (0:00:00.161) 0:00:26.863 ******* 2025-02-10 09:29:31.962681 | orchestrator | 2025-02-10 09:29:31.962689 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-02-10 09:29:31.962696 | orchestrator | Monday 10 February 2025 09:27:53 +0000 (0:00:00.106) 0:00:26.970 ******* 2025-02-10 09:29:31.962703 | orchestrator | skipping: [testbed-node-0] 2025-02-10 
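The "Disable shard allocation" handler above is skipped on a fresh deployment; when it does run (for example before a rolling restart of an existing cluster), it would typically issue a cluster-settings request along the lines of the sketch below. The node address, the uri-module wording and the "primaries" setting are assumptions; only the handler's purpose is taken from the log.

# Minimal sketch: pause shard allocation before restarting an OpenSearch node.
- name: Disable shard allocation
  ansible.builtin.uri:
    url: "http://192.168.16.10:9200/_cluster/settings"
    method: PUT
    body_format: json
    body:
      persistent:
        cluster.routing.allocation.enable: "primaries"
  run_once: true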
09:29:31.962714 | orchestrator | 2025-02-10 09:29:31.962722 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-02-10 09:29:31.962729 | orchestrator | Monday 10 February 2025 09:27:53 +0000 (0:00:00.774) 0:00:27.744 ******* 2025-02-10 09:29:31.962737 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:29:31.962744 | orchestrator | 2025-02-10 09:29:31.962752 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-02-10 09:29:31.962759 | orchestrator | Monday 10 February 2025 09:27:54 +0000 (0:00:00.475) 0:00:28.219 ******* 2025-02-10 09:29:31.962771 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:29:31.962779 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:29:31.962786 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:29:31.962793 | orchestrator | 2025-02-10 09:29:31.962801 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-02-10 09:29:31.962808 | orchestrator | Monday 10 February 2025 09:28:17 +0000 (0:00:23.325) 0:00:51.544 ******* 2025-02-10 09:29:31.962815 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:29:31.962823 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:29:31.962830 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:29:31.962838 | orchestrator | 2025-02-10 09:29:31.962845 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-02-10 09:29:31.962866 | orchestrator | Monday 10 February 2025 09:29:16 +0000 (0:00:59.236) 0:01:50.781 ******* 2025-02-10 09:29:31.962874 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:29:31.962881 | orchestrator | 2025-02-10 09:29:31.962889 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-02-10 09:29:31.962896 | orchestrator | Monday 10 February 2025 09:29:17 +0000 (0:00:00.796) 0:01:51.577 ******* 2025-02-10 09:29:31.962904 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:29:31.962911 | orchestrator | 2025-02-10 09:29:31.962919 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-02-10 09:29:31.962926 | orchestrator | Monday 10 February 2025 09:29:20 +0000 (0:00:02.829) 0:01:54.407 ******* 2025-02-10 09:29:31.962933 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:29:31.962941 | orchestrator | 2025-02-10 09:29:31.962948 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-02-10 09:29:31.962956 | orchestrator | Monday 10 February 2025 09:29:23 +0000 (0:00:02.896) 0:01:57.303 ******* 2025-02-10 09:29:31.962963 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:29:31.962971 | orchestrator | 2025-02-10 09:29:31.962978 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-02-10 09:29:31.962985 | orchestrator | Monday 10 February 2025 09:29:26 +0000 (0:00:03.102) 0:02:00.405 ******* 2025-02-10 09:29:31.962993 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:29:31.963000 | orchestrator | 2025-02-10 09:29:31.963007 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:29:31.963015 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:29:31.963035 | orchestrator | testbed-node-1 : 
ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:29:35.007001 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:29:35.007160 | orchestrator | 2025-02-10 09:29:35.007182 | orchestrator | 2025-02-10 09:29:35.007197 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:29:35.007212 | orchestrator | Monday 10 February 2025 09:29:29 +0000 (0:00:03.126) 0:02:03.531 ******* 2025-02-10 09:29:35.007225 | orchestrator | =============================================================================== 2025-02-10 09:29:35.007240 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 59.24s 2025-02-10 09:29:35.007253 | orchestrator | opensearch : Restart opensearch container ------------------------------ 23.33s 2025-02-10 09:29:35.007266 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.75s 2025-02-10 09:29:35.007280 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.67s 2025-02-10 09:29:35.007289 | orchestrator | opensearch : Check opensearch containers -------------------------------- 3.54s 2025-02-10 09:29:35.007297 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.28s 2025-02-10 09:29:35.007339 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.13s 2025-02-10 09:29:35.007348 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.10s 2025-02-10 09:29:35.007356 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.94s 2025-02-10 09:29:35.007364 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.90s 2025-02-10 09:29:35.007372 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.83s 2025-02-10 09:29:35.007380 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.92s 2025-02-10 09:29:35.007388 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.79s 2025-02-10 09:29:35.007396 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.13s 2025-02-10 09:29:35.007405 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.11s 2025-02-10 09:29:35.007413 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.95s 2025-02-10 09:29:35.007421 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.80s 2025-02-10 09:29:35.007429 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.77s 2025-02-10 09:29:35.007437 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.65s 2025-02-10 09:29:35.007446 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.64s 2025-02-10 09:29:35.007454 | orchestrator | 2025-02-10 09:29:31 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:35.007479 | orchestrator | 2025-02-10 09:29:35 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:35.007718 | orchestrator | 2025-02-10 09:29:35 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 
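Further up, the play checked for a log retention policy, created one and applied it to existing indices. In OpenSearch this is normally handled by the Index State Management (ISM) plugin; the sketch below shows what such a create step could look like. The endpoint, policy name, age threshold and index pattern are assumptions made for illustration; only the task name echoes the log.

# Minimal sketch of an ISM retention policy create, assuming the plugin's
# default endpoint; the 14-day threshold and the "flog-*" pattern are made up.
- name: Create new log retention policy
  ansible.builtin.uri:
    url: "http://192.168.16.10:9200/_plugins/_ism/policies/retention"
    method: PUT
    body_format: json
    status_code: [200, 201]
    body:
      policy:
        description: "Delete old log indices"
        default_state: retain
        states:
          - name: retain
            actions: []
            transitions:
              - state_name: delete
                conditions:
                  min_index_age: "14d"
          - name: delete
            actions:
              - delete: {}
        ism_template:
          - index_patterns: ["flog-*"]
            priority: 1
  run_once: true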
09:29:38.052125 | orchestrator | 2025-02-10 09:29:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:38.052290 | orchestrator | 2025-02-10 09:29:38 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:38.052562 | orchestrator | 2025-02-10 09:29:38 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:29:41.100413 | orchestrator | 2025-02-10 09:29:38 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:41.100645 | orchestrator | 2025-02-10 09:29:41 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:41.101346 | orchestrator | 2025-02-10 09:29:41 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:29:44.149384 | orchestrator | 2025-02-10 09:29:41 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:44.149547 | orchestrator | 2025-02-10 09:29:44 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:44.149635 | orchestrator | 2025-02-10 09:29:44 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:29:47.193054 | orchestrator | 2025-02-10 09:29:44 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:47.193219 | orchestrator | 2025-02-10 09:29:47 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:47.194455 | orchestrator | 2025-02-10 09:29:47 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:29:50.246324 | orchestrator | 2025-02-10 09:29:47 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:50.246485 | orchestrator | 2025-02-10 09:29:50 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:53.292043 | orchestrator | 2025-02-10 09:29:50 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:29:53.292225 | orchestrator | 2025-02-10 09:29:50 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:53.292264 | orchestrator | 2025-02-10 09:29:53 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:53.292636 | orchestrator | 2025-02-10 09:29:53 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:29:56.331478 | orchestrator | 2025-02-10 09:29:53 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:56.331671 | orchestrator | 2025-02-10 09:29:56 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:56.331770 | orchestrator | 2025-02-10 09:29:56 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:29:59.393082 | orchestrator | 2025-02-10 09:29:56 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:29:59.393248 | orchestrator | 2025-02-10 09:29:59 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:29:59.394340 | orchestrator | 2025-02-10 09:29:59 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:02.444545 | orchestrator | 2025-02-10 09:29:59 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:30:02.444695 | orchestrator | 2025-02-10 09:30:02 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:05.502187 | orchestrator | 2025-02-10 09:30:02 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:05.502332 | orchestrator | 2025-02-10 09:30:02 | INFO  | Wait 1 second(s) until the next 
check 2025-02-10 09:30:05.502369 | orchestrator | 2025-02-10 09:30:05 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:08.545121 | orchestrator | 2025-02-10 09:30:05 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:08.545244 | orchestrator | 2025-02-10 09:30:05 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:30:08.545272 | orchestrator | 2025-02-10 09:30:08 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:11.594587 | orchestrator | 2025-02-10 09:30:08 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:11.594765 | orchestrator | 2025-02-10 09:30:08 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:30:11.594826 | orchestrator | 2025-02-10 09:30:11 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:14.640718 | orchestrator | 2025-02-10 09:30:11 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:14.640924 | orchestrator | 2025-02-10 09:30:11 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:30:14.640972 | orchestrator | 2025-02-10 09:30:14 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:17.682406 | orchestrator | 2025-02-10 09:30:14 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:17.682567 | orchestrator | 2025-02-10 09:30:14 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:30:17.682651 | orchestrator | 2025-02-10 09:30:17 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:20.727316 | orchestrator | 2025-02-10 09:30:17 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:20.727456 | orchestrator | 2025-02-10 09:30:17 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:30:20.727496 | orchestrator | 2025-02-10 09:30:20 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:23.776487 | orchestrator | 2025-02-10 09:30:20 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:23.776641 | orchestrator | 2025-02-10 09:30:20 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:30:23.776678 | orchestrator | 2025-02-10 09:30:23 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:23.779613 | orchestrator | 2025-02-10 09:30:23 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:26.835231 | orchestrator | 2025-02-10 09:30:23 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:30:26.835383 | orchestrator | 2025-02-10 09:30:26 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:26.837752 | orchestrator | 2025-02-10 09:30:26 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:26.838371 | orchestrator | 2025-02-10 09:30:26 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:30:29.888197 | orchestrator | 2025-02-10 09:30:29 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:29.888392 | orchestrator | 2025-02-10 09:30:29 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:32.933967 | orchestrator | 2025-02-10 09:30:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:30:32.934145 | orchestrator | 2025-02-10 09:30:32 | INFO  | Task 
b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:32.934209 | orchestrator | 2025-02-10 09:30:32 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:35.975648 | orchestrator | 2025-02-10 09:30:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:30:35.975815 | orchestrator | 2025-02-10 09:30:35 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:35.977093 | orchestrator | 2025-02-10 09:30:35 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:35.977575 | orchestrator | 2025-02-10 09:30:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:30:39.029360 | orchestrator | 2025-02-10 09:30:39 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:39.029579 | orchestrator | 2025-02-10 09:30:39 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:39.029789 | orchestrator | 2025-02-10 09:30:39 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:30:42.083664 | orchestrator | 2025-02-10 09:30:42 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:45.127844 | orchestrator | 2025-02-10 09:30:42 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:45.127975 | orchestrator | 2025-02-10 09:30:42 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:30:45.127996 | orchestrator | 2025-02-10 09:30:45 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:48.175128 | orchestrator | 2025-02-10 09:30:45 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:48.175238 | orchestrator | 2025-02-10 09:30:45 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:30:48.175260 | orchestrator | 2025-02-10 09:30:48 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:48.175716 | orchestrator | 2025-02-10 09:30:48 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:51.226487 | orchestrator | 2025-02-10 09:30:48 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:30:51.226646 | orchestrator | 2025-02-10 09:30:51 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:51.227711 | orchestrator | 2025-02-10 09:30:51 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:51.227945 | orchestrator | 2025-02-10 09:30:51 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:30:54.282482 | orchestrator | 2025-02-10 09:30:54 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:54.283051 | orchestrator | 2025-02-10 09:30:54 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:30:54.283150 | orchestrator | 2025-02-10 09:30:54 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:30:57.322705 | orchestrator | 2025-02-10 09:30:57 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:30:57.324084 | orchestrator | 2025-02-10 09:30:57 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:31:00.352829 | orchestrator | 2025-02-10 09:30:57 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:00.353038 | orchestrator | 2025-02-10 09:31:00 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:31:00.353397 | orchestrator 
| 2025-02-10 09:31:00 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:31:03.389530 | orchestrator | 2025-02-10 09:31:00 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:03.389651 | orchestrator | 2025-02-10 09:31:03 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:31:06.426545 | orchestrator | 2025-02-10 09:31:03 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:31:06.426725 | orchestrator | 2025-02-10 09:31:03 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:06.426790 | orchestrator | 2025-02-10 09:31:06 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:31:06.426881 | orchestrator | 2025-02-10 09:31:06 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:31:09.467077 | orchestrator | 2025-02-10 09:31:06 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:09.467231 | orchestrator | 2025-02-10 09:31:09 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:31:09.467717 | orchestrator | 2025-02-10 09:31:09 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:31:12.505585 | orchestrator | 2025-02-10 09:31:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:12.505856 | orchestrator | 2025-02-10 09:31:12 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:31:12.506219 | orchestrator | 2025-02-10 09:31:12 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:31:15.551750 | orchestrator | 2025-02-10 09:31:12 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:15.551989 | orchestrator | 2025-02-10 09:31:15 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:31:18.593994 | orchestrator | 2025-02-10 09:31:15 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:31:18.595175 | orchestrator | 2025-02-10 09:31:15 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:18.595259 | orchestrator | 2025-02-10 09:31:18 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:31:21.636971 | orchestrator | 2025-02-10 09:31:18 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:31:21.637308 | orchestrator | 2025-02-10 09:31:18 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:21.637358 | orchestrator | 2025-02-10 09:31:21 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state STARTED 2025-02-10 09:31:21.637461 | orchestrator | 2025-02-10 09:31:21 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:31:24.708672 | orchestrator | 2025-02-10 09:31:21 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:24.708830 | orchestrator | 2025-02-10 09:31:24 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:31:24.714483 | orchestrator | 2025-02-10 09:31:24 | INFO  | Task b836a833-2f9e-4a74-b9f4-1010374d03ae is in state SUCCESS 2025-02-10 09:31:24.715408 | orchestrator | 2025-02-10 09:31:24 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:31:24.715466 | orchestrator | 2025-02-10 09:31:24.715483 | orchestrator | 2025-02-10 09:31:24.715498 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-02-10 09:31:24.715523 | 
orchestrator | 2025-02-10 09:31:24.715538 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-02-10 09:31:24.715552 | orchestrator | Monday 10 February 2025 09:27:26 +0000 (0:00:00.083) 0:00:00.083 ******* 2025-02-10 09:31:24.715566 | orchestrator | ok: [localhost] => { 2025-02-10 09:31:24.715583 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-02-10 09:31:24.715597 | orchestrator | } 2025-02-10 09:31:24.715611 | orchestrator | 2025-02-10 09:31:24.715625 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-02-10 09:31:24.715639 | orchestrator | Monday 10 February 2025 09:27:26 +0000 (0:00:00.048) 0:00:00.131 ******* 2025-02-10 09:31:24.715654 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 1, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-02-10 09:31:24.715669 | orchestrator | ...ignoring 2025-02-10 09:31:24.715683 | orchestrator | 2025-02-10 09:31:24.715697 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-02-10 09:31:24.715711 | orchestrator | Monday 10 February 2025 09:27:27 +0000 (0:00:01.581) 0:00:01.713 ******* 2025-02-10 09:31:24.715725 | orchestrator | skipping: [localhost] 2025-02-10 09:31:24.715739 | orchestrator | 2025-02-10 09:31:24.715753 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-02-10 09:31:24.715767 | orchestrator | Monday 10 February 2025 09:27:28 +0000 (0:00:00.078) 0:00:01.792 ******* 2025-02-10 09:31:24.715781 | orchestrator | ok: [localhost] 2025-02-10 09:31:24.715794 | orchestrator | 2025-02-10 09:31:24.715809 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:31:24.715823 | orchestrator | 2025-02-10 09:31:24.715842 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:31:24.715857 | orchestrator | Monday 10 February 2025 09:27:28 +0000 (0:00:00.143) 0:00:01.935 ******* 2025-02-10 09:31:24.715871 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:24.715885 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:24.715926 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:24.715940 | orchestrator | 2025-02-10 09:31:24.715954 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:31:24.715968 | orchestrator | Monday 10 February 2025 09:27:28 +0000 (0:00:00.518) 0:00:02.454 ******* 2025-02-10 09:31:24.715982 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-02-10 09:31:24.715998 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-02-10 09:31:24.716014 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-02-10 09:31:24.716030 | orchestrator | 2025-02-10 09:31:24.716046 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-02-10 09:31:24.716062 | orchestrator | 2025-02-10 09:31:24.716078 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-02-10 09:31:24.716120 | orchestrator | Monday 10 February 2025 09:27:29 +0000 (0:00:00.648) 0:00:03.102 ******* 2025-02-10 09:31:24.716137 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:31:24.716152 | 
orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-02-10 09:31:24.716168 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-02-10 09:31:24.716183 | orchestrator | 2025-02-10 09:31:24.716198 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-02-10 09:31:24.716214 | orchestrator | Monday 10 February 2025 09:27:29 +0000 (0:00:00.478) 0:00:03.581 ******* 2025-02-10 09:31:24.716229 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:24.716246 | orchestrator | 2025-02-10 09:31:24.716262 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-02-10 09:31:24.716277 | orchestrator | Monday 10 February 2025 09:27:30 +0000 (0:00:00.773) 0:00:04.355 ******* 2025-02-10 09:31:24.716310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:31:24.716332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:31:24.716358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:31:24.716383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-10 09:31:24.716400 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-10 09:31:24.716414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-10 09:31:24.716435 | orchestrator | 2025-02-10 09:31:24.716449 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-02-10 09:31:24.716463 | orchestrator | Monday 10 February 2025 09:27:35 +0000 (0:00:05.370) 0:00:09.726 ******* 2025-02-10 09:31:24.716477 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:24.716493 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:24.716507 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:24.716520 | orchestrator | 2025-02-10 09:31:24.716535 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-02-10 09:31:24.716548 | orchestrator | Monday 10 February 2025 09:27:36 +0000 (0:00:00.944) 0:00:10.670 ******* 2025-02-10 09:31:24.716562 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:24.716577 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:24.716590 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:24.716605 | orchestrator | 2025-02-10 09:31:24.716618 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-02-10 09:31:24.716632 | orchestrator | Monday 10 February 2025 09:27:38 +0000 (0:00:01.939) 0:00:12.609 ******* 2025-02-10 09:31:24.716653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:31:24.716670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:31:24.716693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:31:24.716785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-10 09:31:24.716813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-10 09:31:24.716837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-10 09:31:24.716874 | orchestrator | 2025-02-10 09:31:24.716926 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-02-10 09:31:24.716952 | orchestrator | Monday 10 February 2025 09:27:45 +0000 (0:00:06.437) 0:00:19.046 ******* 2025-02-10 09:31:24.716976 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:24.717000 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:24.717023 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:24.717044 | orchestrator | 2025-02-10 09:31:24.717059 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-02-10 09:31:24.717073 | orchestrator | Monday 10 February 2025 09:27:46 +0000 (0:00:01.114) 0:00:20.160 ******* 2025-02-10 09:31:24.717087 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:24.717101 
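
The galera.cnf written out in the step above is what wires the three testbed nodes into a single Galera cluster. A minimal sketch of an equivalent task, assuming a hypothetical galera.cnf.j2 template and the /etc/kolla/mariadb config path visible in this log (the actual kolla-ansible role assembles the file from several merged config sources, so this is only an approximation):

  - name: Copying over galera.cnf (simplified sketch)
    become: true
    template:
      src: galera.cnf.j2                   # hypothetical template name
      dest: /etc/kolla/mariadb/galera.cnf  # config path mounted into the container as shown above
      mode: "0660"
    notify:
      - Restart MariaDB container          # hypothetical handler name
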
| orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:24.717115 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:24.717129 | orchestrator | 2025-02-10 09:31:24.717143 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-02-10 09:31:24.717157 | orchestrator | Monday 10 February 2025 09:27:57 +0000 (0:00:11.221) 0:00:31.382 ******* 2025-02-10 09:31:24.717172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:31:24.717198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:31:24.717242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-10 09:31:24.717258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-10 09:31:24.717280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': 
'192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-10 09:31:24.717303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-10 09:31:24.717318 | orchestrator | 2025-02-10 09:31:24.717332 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-02-10 09:31:24.717346 | orchestrator | Monday 10 February 2025 09:28:04 +0000 (0:00:06.521) 0:00:37.903 ******* 2025-02-10 09:31:24.717360 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:24.717374 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:24.717388 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:24.717402 | orchestrator | 2025-02-10 09:31:24.717416 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-02-10 09:31:24.717431 | orchestrator | Monday 10 February 2025 09:28:05 +0000 (0:00:01.323) 0:00:39.226 ******* 2025-02-10 09:31:24.717445 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:24.717460 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:24.717474 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:24.717488 | orchestrator | 2025-02-10 09:31:24.717502 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-02-10 09:31:24.717516 | orchestrator | Monday 10 February 2025 09:28:05 +0000 (0:00:00.498) 0:00:39.725 ******* 2025-02-10 09:31:24.717530 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:24.717544 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:24.717558 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:24.717572 | orchestrator | 2025-02-10 09:31:24.717586 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-02-10 09:31:24.717601 | orchestrator | Monday 10 February 2025 09:28:06 +0000 (0:00:00.471) 0:00:40.196 ******* 2025-02-10 09:31:24.717615 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-02-10 09:31:24.717629 | orchestrator | ...ignoring 2025-02-10 09:31:24.717643 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-02-10 09:31:24.717657 | orchestrator | ...ignoring 2025-02-10 09:31:24.717672 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-02-10 09:31:24.717686 | orchestrator | ...ignoring 2025-02-10 09:31:24.717700 | orchestrator | 2025-02-10 09:31:24.717714 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-02-10 09:31:24.717733 | orchestrator | Monday 10 February 2025 09:28:17 +0000 (0:00:10.894) 0:00:51.091 ******* 2025-02-10 09:31:24.717747 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:24.717761 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:24.717775 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:24.717788 | orchestrator | 2025-02-10 09:31:24.717802 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-02-10 09:31:24.717816 | orchestrator | Monday 10 February 2025 09:28:18 +0000 (0:00:00.681) 0:00:51.772 ******* 2025-02-10 09:31:24.717830 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:24.717844 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:24.717862 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:24.717886 | orchestrator | 2025-02-10 09:31:24.717936 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-02-10 09:31:24.717961 | orchestrator | Monday 10 February 2025 09:28:19 +0000 (0:00:01.208) 0:00:52.980 ******* 2025-02-10 09:31:24.718008 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:24.718111 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:24.718139 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:24.718164 | orchestrator | 2025-02-10 09:31:24.718187 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-02-10 09:31:24.718212 | orchestrator | Monday 10 February 2025 09:28:20 +0000 (0:00:01.479) 0:00:54.460 ******* 2025-02-10 09:31:24.718237 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:24.718263 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:24.718289 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:24.718314 | orchestrator | 2025-02-10 09:31:24.718328 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-02-10 09:31:24.718343 | orchestrator | Monday 10 February 2025 09:28:21 +0000 (0:00:00.800) 0:00:55.262 ******* 2025-02-10 09:31:24.718356 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:24.718371 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:24.718386 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:24.718411 | orchestrator | 2025-02-10 09:31:24.718426 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-02-10 09:31:24.718440 | orchestrator | Monday 10 February 2025 09:28:22 +0000 (0:00:00.701) 0:00:55.963 ******* 2025-02-10 09:31:24.718455 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:24.718481 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:24.718496 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:24.718510 | orchestrator | 2025-02-10 09:31:24.718525 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-02-10 09:31:24.718539 | orchestrator | Monday 10 February 2025 09:28:23 +0000 (0:00:00.904) 0:00:56.868 ******* 2025-02-10 09:31:24.718553 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:24.718567 | orchestrator | skipping: 
[testbed-node-2] 2025-02-10 09:31:24.718581 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-02-10 09:31:24.718595 | orchestrator | 2025-02-10 09:31:24.718608 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-02-10 09:31:24.718622 | orchestrator | Monday 10 February 2025 09:28:23 +0000 (0:00:00.640) 0:00:57.508 ******* 2025-02-10 09:31:24.718636 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:24.718649 | orchestrator | 2025-02-10 09:31:24.718663 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-02-10 09:31:24.718677 | orchestrator | Monday 10 February 2025 09:28:36 +0000 (0:00:12.891) 0:01:10.400 ******* 2025-02-10 09:31:24.718691 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:24.718705 | orchestrator | 2025-02-10 09:31:24.718719 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-02-10 09:31:24.718733 | orchestrator | Monday 10 February 2025 09:28:36 +0000 (0:00:00.136) 0:01:10.536 ******* 2025-02-10 09:31:24.718746 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:24.718760 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:24.718774 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:24.718788 | orchestrator | 2025-02-10 09:31:24.718802 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-02-10 09:31:24.718816 | orchestrator | Monday 10 February 2025 09:28:38 +0000 (0:00:01.333) 0:01:11.870 ******* 2025-02-10 09:31:24.718829 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:24.718843 | orchestrator | 2025-02-10 09:31:24.718857 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-02-10 09:31:24.718871 | orchestrator | Monday 10 February 2025 09:28:48 +0000 (0:00:10.480) 0:01:22.350 ******* 2025-02-10 09:31:24.718885 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
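
The retried check just above, like the ignored "Timeout when waiting for search string MariaDB" failures earlier in the play, matches the behaviour of Ansible's wait_for module probing port 3306 for the MariaDB protocol greeting. A minimal sketch under that assumption, using the bootstrap node address shown in this log:

  - name: Wait for first MariaDB service port liveness (sketch)
    wait_for:
      host: 192.168.16.10        # bootstrap node address from this deployment
      port: 3306
      connect_timeout: 1
      timeout: 10
      search_regex: MariaDB      # the server announces itself in the handshake
    register: check_mariadb_port
    until: check_mariadb_port is success
    retries: 10                  # matches the "(10 retries left)" retry budget above
    delay: 6
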
2025-02-10 09:31:24.718933 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:24.718948 | orchestrator | 2025-02-10 09:31:24.718970 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-02-10 09:31:24.719008 | orchestrator | Monday 10 February 2025 09:28:55 +0000 (0:00:07.209) 0:01:29.560 ******* 2025-02-10 09:31:24.719032 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:24.719057 | orchestrator | 2025-02-10 09:31:24.719080 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-02-10 09:31:24.719099 | orchestrator | Monday 10 February 2025 09:28:58 +0000 (0:00:03.134) 0:01:32.694 ******* 2025-02-10 09:31:24.719113 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:24.719127 | orchestrator | 2025-02-10 09:31:24.719141 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-02-10 09:31:24.719154 | orchestrator | Monday 10 February 2025 09:28:59 +0000 (0:00:00.111) 0:01:32.806 ******* 2025-02-10 09:31:24.719168 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:24.719182 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:24.719196 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:24.719210 | orchestrator | 2025-02-10 09:31:24.719224 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-02-10 09:31:24.719238 | orchestrator | Monday 10 February 2025 09:28:59 +0000 (0:00:00.495) 0:01:33.301 ******* 2025-02-10 09:31:24.719252 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:24.719266 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:24.719279 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:24.719293 | orchestrator | 2025-02-10 09:31:24.719314 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-02-10 09:31:24.719329 | orchestrator | Monday 10 February 2025 09:29:00 +0000 (0:00:00.526) 0:01:33.828 ******* 2025-02-10 09:31:24.719343 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-02-10 09:31:24.719356 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:24.719370 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:24.719384 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:24.719398 | orchestrator | 2025-02-10 09:31:24.719412 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-02-10 09:31:24.719426 | orchestrator | skipping: no hosts matched 2025-02-10 09:31:24.719440 | orchestrator | 2025-02-10 09:31:24.719454 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-02-10 09:31:24.719468 | orchestrator | 2025-02-10 09:31:24.719483 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-02-10 09:31:24.719497 | orchestrator | Monday 10 February 2025 09:29:21 +0000 (0:00:21.254) 0:01:55.083 ******* 2025-02-10 09:31:24.719511 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:24.719525 | orchestrator | 2025-02-10 09:31:24.719538 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-02-10 09:31:24.719552 | orchestrator | Monday 10 February 2025 09:29:37 +0000 (0:00:16.553) 0:02:11.637 ******* 2025-02-10 09:31:24.719566 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:24.719580 | orchestrator | 
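
The WSREP sync waits in this play gate each (re)start on the node reporting itself as a synced cluster member again. One way to express such a check, assuming the docker CLI, the mariadb container name used throughout this log, and a database_password variable holding the root credential (the real role reads these from the kolla configuration), is:

  - name: Wait for MariaDB service to sync WSREP (sketch)
    become: true
    command: >
      docker exec mariadb mysql -u root -p{{ database_password }}
      --silent --skip-column-names
      -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
    register: wsrep_status
    changed_when: false
    no_log: true                             # avoid leaking the credential
    until: "'Synced' in wsrep_status.stdout" # node has rejoined once Galera reports Synced
    retries: 10
    delay: 6
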
2025-02-10 09:31:24.719594 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-02-10 09:31:24.719608 | orchestrator | Monday 10 February 2025 09:29:58 +0000 (0:00:20.633) 0:02:32.270 ******* 2025-02-10 09:31:24.719625 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:24.719648 | orchestrator | 2025-02-10 09:31:24.719670 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-02-10 09:31:24.719693 | orchestrator | 2025-02-10 09:31:24.719718 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-02-10 09:31:24.719740 | orchestrator | Monday 10 February 2025 09:30:01 +0000 (0:00:03.362) 0:02:35.633 ******* 2025-02-10 09:31:24.719762 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:24.719776 | orchestrator | 2025-02-10 09:31:24.719790 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-02-10 09:31:24.719812 | orchestrator | Monday 10 February 2025 09:30:19 +0000 (0:00:17.439) 0:02:53.072 ******* 2025-02-10 09:31:24.719826 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:24.719841 | orchestrator | 2025-02-10 09:31:24.719855 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-02-10 09:31:24.719878 | orchestrator | Monday 10 February 2025 09:30:39 +0000 (0:00:20.614) 0:03:13.686 ******* 2025-02-10 09:31:24.719915 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:24.719930 | orchestrator | 2025-02-10 09:31:24.719944 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-02-10 09:31:24.719958 | orchestrator | 2025-02-10 09:31:24.719972 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-02-10 09:31:24.719986 | orchestrator | Monday 10 February 2025 09:30:43 +0000 (0:00:03.254) 0:03:16.941 ******* 2025-02-10 09:31:24.719999 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:24.720013 | orchestrator | 2025-02-10 09:31:24.720027 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-02-10 09:31:24.720041 | orchestrator | Monday 10 February 2025 09:30:57 +0000 (0:00:14.818) 0:03:31.760 ******* 2025-02-10 09:31:24.720055 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:24.720068 | orchestrator | 2025-02-10 09:31:24.720082 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-02-10 09:31:24.720096 | orchestrator | Monday 10 February 2025 09:31:02 +0000 (0:00:04.597) 0:03:36.358 ******* 2025-02-10 09:31:24.720110 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:24.720123 | orchestrator | 2025-02-10 09:31:24.720137 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-02-10 09:31:24.720151 | orchestrator | 2025-02-10 09:31:24.720165 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-02-10 09:31:24.720178 | orchestrator | Monday 10 February 2025 09:31:05 +0000 (0:00:03.184) 0:03:39.542 ******* 2025-02-10 09:31:24.720192 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:24.720206 | orchestrator | 2025-02-10 09:31:24.720219 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-02-10 09:31:24.720233 | orchestrator | Monday 10 
February 2025 09:31:06 +0000 (0:00:00.668) 0:03:40.211 ******* 2025-02-10 09:31:24.720247 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:24.720261 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:24.720274 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:24.720288 | orchestrator | 2025-02-10 09:31:24.720308 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-02-10 09:31:24.720323 | orchestrator | Monday 10 February 2025 09:31:09 +0000 (0:00:02.774) 0:03:42.985 ******* 2025-02-10 09:31:24.720337 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:24.720351 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:24.720365 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:24.720379 | orchestrator | 2025-02-10 09:31:24.720393 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-02-10 09:31:24.720406 | orchestrator | Monday 10 February 2025 09:31:11 +0000 (0:00:02.645) 0:03:45.631 ******* 2025-02-10 09:31:24.720420 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:24.720434 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:24.720448 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:24.720462 | orchestrator | 2025-02-10 09:31:24.720476 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-02-10 09:31:24.720490 | orchestrator | Monday 10 February 2025 09:31:14 +0000 (0:00:02.574) 0:03:48.206 ******* 2025-02-10 09:31:24.720504 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:24.720517 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:24.720531 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:24.720545 | orchestrator | 2025-02-10 09:31:24.720559 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-02-10 09:31:24.720573 | orchestrator | Monday 10 February 2025 09:31:17 +0000 (0:00:02.696) 0:03:50.903 ******* 2025-02-10 09:31:24.720587 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:24.720601 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:24.720616 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:24.720629 | orchestrator | 2025-02-10 09:31:24.720643 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-02-10 09:31:24.720664 | orchestrator | Monday 10 February 2025 09:31:21 +0000 (0:00:04.025) 0:03:54.928 ******* 2025-02-10 09:31:24.720678 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:24.720692 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:24.720712 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:24.720726 | orchestrator | 2025-02-10 09:31:24.720739 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:31:24.720754 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-02-10 09:31:24.720774 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-02-10 09:31:24.720789 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-02-10 09:31:24.720804 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-02-10 09:31:24.720818 | orchestrator | 2025-02-10 09:31:24.720832 | orchestrator | 2025-02-10 
09:31:24.720846 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:31:24.720859 | orchestrator | Monday 10 February 2025 09:31:21 +0000 (0:00:00.292) 0:03:55.220 ******* 2025-02-10 09:31:24.720873 | orchestrator | =============================================================================== 2025-02-10 09:31:24.720888 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.25s 2025-02-10 09:31:24.720958 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 33.99s 2025-02-10 09:31:24.720980 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 21.25s 2025-02-10 09:31:27.789495 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 14.82s 2025-02-10 09:31:27.789640 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 12.89s 2025-02-10 09:31:27.789660 | orchestrator | mariadb : Copying over galera.cnf -------------------------------------- 11.22s 2025-02-10 09:31:27.789675 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.89s 2025-02-10 09:31:27.789689 | orchestrator | mariadb : Starting first MariaDB container ----------------------------- 10.48s 2025-02-10 09:31:27.789703 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.21s 2025-02-10 09:31:27.789717 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 6.62s 2025-02-10 09:31:27.789731 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 6.52s 2025-02-10 09:31:27.789745 | orchestrator | mariadb : Copying over config.json files for services ------------------- 6.44s 2025-02-10 09:31:27.789759 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 5.37s 2025-02-10 09:31:27.789773 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.60s 2025-02-10 09:31:27.789787 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 4.02s 2025-02-10 09:31:27.789800 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.18s 2025-02-10 09:31:27.789815 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 3.13s 2025-02-10 09:31:27.789829 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.77s 2025-02-10 09:31:27.789868 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.70s 2025-02-10 09:31:27.789882 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.65s 2025-02-10 09:31:27.789947 | orchestrator | 2025-02-10 09:31:24 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:31:27.789963 | orchestrator | 2025-02-10 09:31:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:27.790085 | orchestrator | 2025-02-10 09:31:27 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:31:30.841882 | orchestrator | 2025-02-10 09:31:27 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state STARTED 2025-02-10 09:31:30.842211 | orchestrator | 2025-02-10 09:31:27 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:31:30.842245 | orchestrator | 2025-02-10 09:31:27 
| INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:30.842292 | orchestrator | 2025-02-10 09:31:30 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:31:30.846788 | orchestrator | 2025-02-10 09:31:30 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:31:30.847069 | orchestrator | 2025-02-10 09:31:30 | INFO  | Task 7647509d-e8ff-4cff-9d70-878fd001ac2a is in state SUCCESS 2025-02-10 09:31:30.848415 | orchestrator | 2025-02-10 09:31:30.848537 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-10 09:31:30.848557 | orchestrator | 2025-02-10 09:31:30.848575 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-02-10 09:31:30.848594 | orchestrator | 2025-02-10 09:31:30.848611 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-02-10 09:31:30.848629 | orchestrator | Monday 10 February 2025 09:17:08 +0000 (0:00:01.601) 0:00:01.601 ******* 2025-02-10 09:31:30.848648 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.848666 | orchestrator | 2025-02-10 09:31:30.848683 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-02-10 09:31:30.848701 | orchestrator | Monday 10 February 2025 09:17:09 +0000 (0:00:01.113) 0:00:02.714 ******* 2025-02-10 09:31:30.848719 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-02-10 09:31:30.848730 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-02-10 09:31:30.848750 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-02-10 09:31:30.848761 | orchestrator | 2025-02-10 09:31:30.848771 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-02-10 09:31:30.848787 | orchestrator | Monday 10 February 2025 09:17:10 +0000 (0:00:00.633) 0:00:03.348 ******* 2025-02-10 09:31:30.848804 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.848820 | orchestrator | 2025-02-10 09:31:30.848837 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-02-10 09:31:30.848979 | orchestrator | Monday 10 February 2025 09:17:11 +0000 (0:00:01.490) 0:00:04.838 ******* 2025-02-10 09:31:30.848998 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.849010 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.849028 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.849047 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.849063 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.849078 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.849129 | orchestrator | 2025-02-10 09:31:30.849149 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-02-10 09:31:30.849164 | orchestrator | Monday 10 February 2025 09:17:12 +0000 (0:00:01.477) 0:00:06.316 ******* 2025-02-10 09:31:30.849176 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.849187 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.849199 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.849210 | orchestrator | 
ok: [testbed-node-0] 2025-02-10 09:31:30.849222 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.849233 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.849245 | orchestrator | 2025-02-10 09:31:30.849286 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-02-10 09:31:30.849298 | orchestrator | Monday 10 February 2025 09:17:14 +0000 (0:00:01.133) 0:00:07.449 ******* 2025-02-10 09:31:30.849309 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.849321 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.849333 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.849343 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.849354 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.849366 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.849376 | orchestrator | 2025-02-10 09:31:30.849387 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-02-10 09:31:30.849397 | orchestrator | Monday 10 February 2025 09:17:15 +0000 (0:00:01.269) 0:00:08.718 ******* 2025-02-10 09:31:30.849407 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.849417 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.849427 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.849437 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.849447 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.849468 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.849478 | orchestrator | 2025-02-10 09:31:30.849496 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-02-10 09:31:30.849507 | orchestrator | Monday 10 February 2025 09:17:16 +0000 (0:00:01.243) 0:00:09.962 ******* 2025-02-10 09:31:30.849568 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.849579 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.849589 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.849599 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.849609 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.849619 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.849629 | orchestrator | 2025-02-10 09:31:30.849640 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-02-10 09:31:30.849670 | orchestrator | Monday 10 February 2025 09:17:17 +0000 (0:00:00.736) 0:00:10.699 ******* 2025-02-10 09:31:30.849688 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.849706 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.849722 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.849737 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.849754 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.849771 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.849788 | orchestrator | 2025-02-10 09:31:30.849806 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-02-10 09:31:30.849821 | orchestrator | Monday 10 February 2025 09:17:18 +0000 (0:00:01.146) 0:00:11.845 ******* 2025-02-10 09:31:30.849831 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.849843 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.849865 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.849875 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.849885 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.849951 | orchestrator | skipping: 
[testbed-node-2] 2025-02-10 09:31:30.849965 | orchestrator | 2025-02-10 09:31:30.849975 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-02-10 09:31:30.849985 | orchestrator | Monday 10 February 2025 09:17:19 +0000 (0:00:01.193) 0:00:13.038 ******* 2025-02-10 09:31:30.849996 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.850006 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.850063 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.850076 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.850086 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.850097 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.850107 | orchestrator | 2025-02-10 09:31:30.850130 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-02-10 09:31:30.850149 | orchestrator | Monday 10 February 2025 09:17:20 +0000 (0:00:00.923) 0:00:13.962 ******* 2025-02-10 09:31:30.850166 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-10 09:31:30.850182 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:31:30.850211 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:31:30.850227 | orchestrator | 2025-02-10 09:31:30.850245 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-02-10 09:31:30.850270 | orchestrator | Monday 10 February 2025 09:17:21 +0000 (0:00:01.014) 0:00:14.977 ******* 2025-02-10 09:31:30.850288 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.850305 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.850330 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.850348 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.850366 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.850377 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.850387 | orchestrator | 2025-02-10 09:31:30.850397 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-02-10 09:31:30.850408 | orchestrator | Monday 10 February 2025 09:17:22 +0000 (0:00:01.304) 0:00:16.282 ******* 2025-02-10 09:31:30.850419 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-10 09:31:30.850437 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:31:30.850454 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:31:30.850470 | orchestrator | 2025-02-10 09:31:30.850487 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-02-10 09:31:30.850505 | orchestrator | Monday 10 February 2025 09:17:25 +0000 (0:00:03.043) 0:00:19.325 ******* 2025-02-10 09:31:30.850523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:31:30.850534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:31:30.850545 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:31:30.850555 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.850566 | orchestrator | 2025-02-10 09:31:30.850576 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-02-10 09:31:30.850586 | orchestrator | 
Monday 10 February 2025 09:17:26 +0000 (0:00:00.916) 0:00:20.242 ******* 2025-02-10 09:31:30.850598 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-02-10 09:31:30.850612 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-02-10 09:31:30.850622 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-02-10 09:31:30.850633 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.850643 | orchestrator | 2025-02-10 09:31:30.850653 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-02-10 09:31:30.850663 | orchestrator | Monday 10 February 2025 09:17:28 +0000 (0:00:01.636) 0:00:21.879 ******* 2025-02-10 09:31:30.850675 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-10 09:31:30.850692 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-10 09:31:30.850711 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-10 09:31:30.850722 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.850732 | orchestrator | 2025-02-10 09:31:30.850742 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-02-10 09:31:30.850770 | orchestrator | Monday 10 February 2025 09:17:28 +0000 (0:00:00.324) 0:00:22.204 ******* 2025-02-10 09:31:30.850784 | orchestrator | skipping: [testbed-node-3] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-02-10 09:17:23.722692', 'end': '2025-02-10 09:17:23.954941', 'delta': '0:00:00.232249', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 
'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-02-10 09:31:30.850802 | orchestrator | skipping: [testbed-node-3] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-02-10 09:17:24.574801', 'end': '2025-02-10 09:17:24.818790', 'delta': '0:00:00.243989', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-02-10 09:31:30.850820 | orchestrator | skipping: [testbed-node-3] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-02-10 09:17:25.489724', 'end': '2025-02-10 09:17:25.785026', 'delta': '0:00:00.295302', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-02-10 09:31:30.850837 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.850853 | orchestrator | 2025-02-10 09:31:30.850871 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-02-10 09:31:30.850888 | orchestrator | Monday 10 February 2025 09:17:29 +0000 (0:00:00.214) 0:00:22.418 ******* 2025-02-10 09:31:30.850925 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.850936 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.850946 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.850957 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.850967 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.851008 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.851019 | orchestrator | 2025-02-10 09:31:30.851029 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-02-10 09:31:30.851049 | orchestrator | Monday 10 February 2025 09:17:30 +0000 (0:00:01.168) 0:00:23.586 ******* 2025-02-10 09:31:30.851067 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:31:30.851080 | orchestrator | 2025-02-10 09:31:30.851098 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-02-10 09:31:30.851183 | orchestrator | Monday 10 February 2025 09:17:30 +0000 (0:00:00.607) 0:00:24.194 ******* 2025-02-10 09:31:30.851196 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.851212 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.851222 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.851233 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.851243 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.851253 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.851263 | orchestrator | 
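Taken together, the ceph-facts tasks up to this point reduce to a small decision: pick a container runtime, build an exec prefix for a running monitor container, and reuse the cluster fsid if one can be read from it. A minimal Python sketch of that logic, assuming the behaviour implied by the task names above; the helper names and the exact "ceph fsid" invocation are illustrative assumptions, not the role's actual Jinja templates:

import shutil
import subprocess
import uuid

def container_binary():
    # "check if podman binary is present" / "set_fact container_binary":
    # prefer podman when it is installed, otherwise fall back to docker (assumption).
    return "podman" if shutil.which("podman") else "docker"

def container_exec_cmd(mon_hostname):
    # "set_fact container_exec_cmd": the prefix used to run ceph commands
    # inside the monitor container on the delegated node (assumed naming).
    return [container_binary(), "exec", f"ceph-mon-{mon_hostname}"]

def cluster_fsid(mon_hostname):
    # "get current fsid if cluster is already running": ask a running mon for
    # the fsid; "generate cluster fsid" only applies when nothing usable comes back.
    proc = subprocess.run(container_exec_cmd(mon_hostname) + ["ceph", "fsid"],
                          capture_output=True, text=True)
    current = proc.stdout.strip() if proc.returncode == 0 else ""
    return current or str(uuid.uuid4())

In this run the fsid query against testbed-node-0 succeeds, so the fallback and generate tasks that follow are skipped.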
2025-02-10 09:31:30.851273 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-02-10 09:31:30.851288 | orchestrator | Monday 10 February 2025 09:17:31 +0000 (0:00:00.847) 0:00:25.041 ******* 2025-02-10 09:31:30.851306 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.851322 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.851339 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.851356 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.851373 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.851390 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.851400 | orchestrator | 2025-02-10 09:31:30.851411 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-10 09:31:30.851421 | orchestrator | Monday 10 February 2025 09:17:33 +0000 (0:00:01.571) 0:00:26.613 ******* 2025-02-10 09:31:30.851431 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.851441 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.851451 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.851461 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.851471 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.851482 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.851492 | orchestrator | 2025-02-10 09:31:30.851503 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-02-10 09:31:30.851521 | orchestrator | Monday 10 February 2025 09:17:33 +0000 (0:00:00.721) 0:00:27.334 ******* 2025-02-10 09:31:30.851532 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.851542 | orchestrator | 2025-02-10 09:31:30.851552 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-02-10 09:31:30.851562 | orchestrator | Monday 10 February 2025 09:17:34 +0000 (0:00:00.497) 0:00:27.831 ******* 2025-02-10 09:31:30.851572 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.851582 | orchestrator | 2025-02-10 09:31:30.851592 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-10 09:31:30.851603 | orchestrator | Monday 10 February 2025 09:17:34 +0000 (0:00:00.281) 0:00:28.113 ******* 2025-02-10 09:31:30.851613 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.851623 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.851633 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.851643 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.851653 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.851663 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.851673 | orchestrator | 2025-02-10 09:31:30.851683 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-02-10 09:31:30.851693 | orchestrator | Monday 10 February 2025 09:17:35 +0000 (0:00:00.957) 0:00:29.071 ******* 2025-02-10 09:31:30.851703 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.851714 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.851724 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.851733 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.851743 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.851753 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.851763 | 
orchestrator | 2025-02-10 09:31:30.851774 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-02-10 09:31:30.851792 | orchestrator | Monday 10 February 2025 09:17:36 +0000 (0:00:01.219) 0:00:30.290 ******* 2025-02-10 09:31:30.851803 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.851813 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.851823 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.851833 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.851843 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.851863 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.851873 | orchestrator | 2025-02-10 09:31:30.851884 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-02-10 09:31:30.851913 | orchestrator | Monday 10 February 2025 09:17:37 +0000 (0:00:00.796) 0:00:31.086 ******* 2025-02-10 09:31:30.851932 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.851948 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.851965 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.851981 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.851998 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.852016 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.852030 | orchestrator | 2025-02-10 09:31:30.852041 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-02-10 09:31:30.852051 | orchestrator | Monday 10 February 2025 09:17:39 +0000 (0:00:01.310) 0:00:32.397 ******* 2025-02-10 09:31:30.852061 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.852071 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.852082 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.852092 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.852101 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.852112 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.852122 | orchestrator | 2025-02-10 09:31:30.852132 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-02-10 09:31:30.852142 | orchestrator | Monday 10 February 2025 09:17:39 +0000 (0:00:00.852) 0:00:33.250 ******* 2025-02-10 09:31:30.852153 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.852163 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.852173 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.852183 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.852193 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.852203 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.852213 | orchestrator | 2025-02-10 09:31:30.852223 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-02-10 09:31:30.852234 | orchestrator | Monday 10 February 2025 09:17:40 +0000 (0:00:01.052) 0:00:34.302 ******* 2025-02-10 09:31:30.852244 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.852254 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.852264 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.852274 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.852285 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.852295 | orchestrator | skipping: [testbed-node-2] 2025-02-10 
09:31:30.852305 | orchestrator | 2025-02-10 09:31:30.852315 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-02-10 09:31:30.852326 | orchestrator | Monday 10 February 2025 09:17:41 +0000 (0:00:00.729) 0:00:35.032 ******* 2025-02-10 09:31:30.852338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f024456c--4135--5029--bf0e--13fb105dc5b7-osd--block--f024456c--4135--5029--bf0e--13fb105dc5b7', 'dm-uuid-LVM-h3ypNuwZWj2S4djDOMdryAWIRBQEd03bLxlUATcvF5FxMKzE3Dd5KLuNVfghAhL4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a3ebd317--95a0--5383--a134--14be01baa44d-osd--block--a3ebd317--95a0--5383--a134--14be01baa44d', 'dm-uuid-LVM-yrWaaOsW8g6wWkHwEDVP4bp11l3u7ccCF1PsELKWIgHYSrpkSyx99K1uIWL1F0Kl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852405 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8f95f397--c0f5--5bc9--9af0--9f577faebed9-osd--block--8f95f397--c0f5--5bc9--9af0--9f577faebed9', 'dm-uuid-LVM-uhI5MWJlMX7QVsgsSfRBdnnDS5EhplIv6LUEclEn4dSHXjMet8gvcOzpJUZXzPv7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--204ceda1--8353--534a--a397--2ce8fe516c0b-osd--block--204ceda1--8353--534a--a397--2ce8fe516c0b', 'dm-uuid-LVM-DBV0ZXNf5Rux7ZKFvL0W1kv5R7eU8F8uRvmLcQfYR9yeelyfE3JT5St3LjN1vmn1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84', 'scsi-SQEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.852533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f024456c--4135--5029--bf0e--13fb105dc5b7-osd--block--f024456c--4135--5029--bf0e--13fb105dc5b7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-98caFU-c1oV-q0at-uThP-j5GP-8Amf-KtSTM5', 'scsi-0QEMU_QEMU_HARDDISK_2f4b37ab-ea48-4e89-a573-74f28832e598', 'scsi-SQEMU_QEMU_HARDDISK_2f4b37ab-ea48-4e89-a573-74f28832e598'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.852571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c468f1bf--17d5--510b--8602--ed8efc51f14c-osd--block--c468f1bf--17d5--510b--8602--ed8efc51f14c', 'dm-uuid-LVM-a8C2gnTgwcOwFPJA2mm9UewWaXbvd0CLiixcWuVbeZpi0dDnE05g7vE0nBiDyAJ8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a3ebd317--95a0--5383--a134--14be01baa44d-osd--block--a3ebd317--95a0--5383--a134--14be01baa44d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ubigmV-AdE2-nFuE-5Jj2-kBId-NpGc-T88bcC', 'scsi-0QEMU_QEMU_HARDDISK_5f0d01b9-0e02-4dee-9565-cff6803c305a', 'scsi-SQEMU_QEMU_HARDDISK_5f0d01b9-0e02-4dee-9565-cff6803c305a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.852593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b75c92e--4993--5ff3--a16a--a182a58c3e6b-osd--block--9b75c92e--4993--5ff3--a16a--a182a58c3e6b', 'dm-uuid-LVM-RQc7qDSCkwgL9Ynbo467106NyuNKxjkVxZiXie2vTtw4eqcbamkRGKXeBnIB4fIN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-02-10 09:31:30.852626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9150377-bf23-4053-9d8b-4b6b16705e51', 'scsi-SQEMU_QEMU_HARDDISK_b9150377-bf23-4053-9d8b-4b6b16705e51'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.852661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.852688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
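Each skipped item in this task is one entry from ansible_facts['devices']. The task would only build the OSD device list automatically when osd_auto_discovery is enabled; every item being skipped here is consistent with that option being disabled for this testbed, which defines its OSD devices explicitly. Broadly, such auto-discovery keeps empty, non-removable whole disks and ignores loop and device-mapper devices, the CD-ROM, partitioned system disks, and disks already claimed by LVM for existing OSDs. A rough Python sketch of that kind of filter over the fact structure echoed in the log; the exact conditions used by ceph-ansible differ and are not reproduced here:

def auto_discover_osd_devices(devices_facts):
    # devices_facts is the ansible_facts['devices'] mapping shown in the skipped
    # items, e.g. {'sdb': {'holders': [...], 'partitions': {...}, ...}}.
    # The criteria below are an approximation for illustration only.
    selected = []
    for name, info in devices_facts.items():
        if name.startswith(("loop", "dm-", "sr")):
            continue  # loop, device-mapper and optical devices are never candidates
        if info.get("removable") != "0":
            continue  # skip removable media
        if info.get("partitions"):
            continue  # skip disks that already carry partitions (e.g. the root disk sda)
        if info.get("holders"):
            continue  # skip disks already claimed, e.g. LVM PVs backing existing OSDs (sdb, sdc)
        selected.append("/dev/" + name)
    return selected

Applied to the facts shown for these nodes, roughly only the unused whole disks such as sdd would survive such a filter.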
2025-02-10 09:31:30.852732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66', 'scsi-SQEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66-part1', 'scsi-SQEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66-part14', 'scsi-SQEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66-part15', 'scsi-SQEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66-part16', 'scsi-SQEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.852962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.852983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f2f0c75-1857-43ef-b86a-d1c385559ce2', 'scsi-SQEMU_QEMU_HARDDISK_3f2f0c75-1857-43ef-b86a-d1c385559ce2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.852999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c8bf85e-c93c-4dde-a0b9-becc690957dc', 'scsi-SQEMU_QEMU_HARDDISK_4c8bf85e-c93c-4dde-a0b9-becc690957dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.853034 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899', 'scsi-SQEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899-part1', 'scsi-SQEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899-part14', 'scsi-SQEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899-part15', 'scsi-SQEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899-part16', 'scsi-SQEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_75df373f-19f7-4c01-b032-3384165fc32e', 'scsi-SQEMU_QEMU_HARDDISK_75df373f-19f7-4c01-b032-3384165fc32e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.853104 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.853123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642', 'scsi-SQEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642-part1', 'scsi-SQEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642-part14', 'scsi-SQEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642-part15', 'scsi-SQEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642-part16', 'scsi-SQEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8f95f397--c0f5--5bc9--9af0--9f577faebed9-osd--block--8f95f397--c0f5--5bc9--9af0--9f577faebed9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bqvcn8-mVqm-BLr0-ANFq-gzac-dC5g-Mq8mV7', 'scsi-0QEMU_QEMU_HARDDISK_b66e53a8-0538-4d41-8a28-7ec132d4688f', 'scsi-SQEMU_QEMU_HARDDISK_b66e53a8-0538-4d41-8a28-7ec132d4688f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--204ceda1--8353--534a--a397--2ce8fe516c0b-osd--block--204ceda1--8353--534a--a397--2ce8fe516c0b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qC5aBM-meSg-TaTe-C4KK-rMdi-fEdd-SbdWJP', 'scsi-0QEMU_QEMU_HARDDISK_a5ae359e-12ae-4197-8eef-3ae34f8c1334', 'scsi-SQEMU_QEMU_HARDDISK_a5ae359e-12ae-4197-8eef-3ae34f8c1334'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c468f1bf--17d5--510b--8602--ed8efc51f14c-osd--block--c468f1bf--17d5--510b--8602--ed8efc51f14c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Eevd5D-qOUd-EQFp-X6R4-ym4s-XMFN-shAQIW', 'scsi-0QEMU_QEMU_HARDDISK_f26c39ad-11ff-4bfe-ad92-01d3e6216f06', 'scsi-SQEMU_QEMU_HARDDISK_f26c39ad-11ff-4bfe-ad92-01d3e6216f06'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2438f8bd-e1da-4f87-b9a4-97b4ac996f9c', 'scsi-SQEMU_QEMU_HARDDISK_2438f8bd-e1da-4f87-b9a4-97b4ac996f9c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9b75c92e--4993--5ff3--a16a--a182a58c3e6b-osd--block--9b75c92e--4993--5ff3--a16a--a182a58c3e6b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yleaN1-8e0F-mdX7-rSzw-asqN-R9lE-Re8mng', 'scsi-0QEMU_QEMU_HARDDISK_8c6e9329-7e35-46a5-ba0c-0fddbc56ea2a', 'scsi-SQEMU_QEMU_HARDDISK_8c6e9329-7e35-46a5-ba0c-0fddbc56ea2a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_30eee918-495f-46ac-9f20-7bf018cd9f92', 'scsi-SQEMU_QEMU_HARDDISK_30eee918-495f-46ac-9f20-7bf018cd9f92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.853332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.853342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.853357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.853368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.853379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.853389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.853399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.853416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_232a29e8-485f-4033-b159-19c4e9acd946', 'scsi-SQEMU_QEMU_HARDDISK_232a29e8-485f-4033-b159-19c4e9acd946'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_232a29e8-485f-4033-b159-19c4e9acd946-part1', 'scsi-SQEMU_QEMU_HARDDISK_232a29e8-485f-4033-b159-19c4e9acd946-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_232a29e8-485f-4033-b159-19c4e9acd946-part14', 'scsi-SQEMU_QEMU_HARDDISK_232a29e8-485f-4033-b159-19c4e9acd946-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_232a29e8-485f-4033-b159-19c4e9acd946-part15', 'scsi-SQEMU_QEMU_HARDDISK_232a29e8-485f-4033-b159-19c4e9acd946-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_232a29e8-485f-4033-b159-19c4e9acd946-part16', 'scsi-SQEMU_QEMU_HARDDISK_232a29e8-485f-4033-b159-19c4e9acd946-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ab0a82a-cefc-4a53-8b35-3a0c471d1d44', 'scsi-SQEMU_QEMU_HARDDISK_1ab0a82a-cefc-4a53-8b35-3a0c471d1d44'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_847baef5-49eb-4270-9699-f3453f51c947', 'scsi-SQEMU_QEMU_HARDDISK_847baef5-49eb-4270-9699-f3453f51c947'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0052f5e-c6c0-4052-8cbb-79a9efbad2c5', 'scsi-SQEMU_QEMU_HARDDISK_d0052f5e-c6c0-4052-8cbb-79a9efbad2c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853484 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.853495 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.853505 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.853515 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.853526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.853541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.853552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.853567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.853578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.853589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.853599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.853615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:31:30.853627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec7b193c-21f8-4d72-a19d-1fec7ab5cb66', 'scsi-SQEMU_QEMU_HARDDISK_ec7b193c-21f8-4d72-a19d-1fec7ab5cb66'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec7b193c-21f8-4d72-a19d-1fec7ab5cb66-part1', 'scsi-SQEMU_QEMU_HARDDISK_ec7b193c-21f8-4d72-a19d-1fec7ab5cb66-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec7b193c-21f8-4d72-a19d-1fec7ab5cb66-part14', 'scsi-SQEMU_QEMU_HARDDISK_ec7b193c-21f8-4d72-a19d-1fec7ab5cb66-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec7b193c-21f8-4d72-a19d-1fec7ab5cb66-part15', 'scsi-SQEMU_QEMU_HARDDISK_ec7b193c-21f8-4d72-a19d-1fec7ab5cb66-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec7b193c-21f8-4d72-a19d-1fec7ab5cb66-part16', 'scsi-SQEMU_QEMU_HARDDISK_ec7b193c-21f8-4d72-a19d-1fec7ab5cb66-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 
'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7291aabe-5e3f-438e-8469-36f2cb5c6009', 'scsi-SQEMU_QEMU_HARDDISK_7291aabe-5e3f-438e-8469-36f2cb5c6009'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3635edd1-676b-4d23-b864-ce2187808155', 'scsi-SQEMU_QEMU_HARDDISK_3635edd1-676b-4d23-b864-ce2187808155'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c812b351-48e5-4920-9aaa-4a69febb969f', 'scsi-SQEMU_QEMU_HARDDISK_c812b351-48e5-4920-9aaa-4a69febb969f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:31:30.853694 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.853704 | orchestrator | 2025-02-10 09:31:30.853715 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-02-10 09:31:30.853725 | orchestrator | Monday 10 February 2025 09:17:43 +0000 (0:00:02.024) 0:00:37.057 ******* 2025-02-10 09:31:30.853735 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:31:30.853746 | orchestrator | 2025-02-10 09:31:30.853756 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-02-10 09:31:30.853766 | orchestrator | Monday 10 February 2025 09:17:44 +0000 (0:00:00.871) 0:00:37.928 ******* 2025-02-10 09:31:30.853776 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.853786 | orchestrator | 2025-02-10 09:31:30.853796 | orchestrator | TASK [ceph-facts : set_fact 
rgw_hostname] ************************************** 2025-02-10 09:31:30.853807 | orchestrator | Monday 10 February 2025 09:17:44 +0000 (0:00:00.178) 0:00:38.106 ******* 2025-02-10 09:31:30.853817 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.853827 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.853837 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.853847 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.853857 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.853867 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.853877 | orchestrator | 2025-02-10 09:31:30.853887 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-02-10 09:31:30.853923 | orchestrator | Monday 10 February 2025 09:17:46 +0000 (0:00:01.273) 0:00:39.380 ******* 2025-02-10 09:31:30.853942 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.853961 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.853977 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.853993 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.854009 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.854059 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.854070 | orchestrator | 2025-02-10 09:31:30.854080 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-02-10 09:31:30.854091 | orchestrator | Monday 10 February 2025 09:17:48 +0000 (0:00:02.030) 0:00:41.410 ******* 2025-02-10 09:31:30.854101 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.854111 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.854121 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.854131 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.854141 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.854151 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.854161 | orchestrator | 2025-02-10 09:31:30.854171 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-10 09:31:30.854182 | orchestrator | Monday 10 February 2025 09:17:48 +0000 (0:00:00.909) 0:00:42.320 ******* 2025-02-10 09:31:30.854192 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.854210 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.854499 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.854534 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.854545 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.854556 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.854575 | orchestrator | 2025-02-10 09:31:30.854592 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-10 09:31:30.854610 | orchestrator | Monday 10 February 2025 09:17:50 +0000 (0:00:01.250) 0:00:43.571 ******* 2025-02-10 09:31:30.854628 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.854647 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.854659 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.854669 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.854679 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.854690 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.854699 | orchestrator | 2025-02-10 09:31:30.854710 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-10 
09:31:30.854720 | orchestrator | Monday 10 February 2025 09:17:50 +0000 (0:00:00.755) 0:00:44.326 ******* 2025-02-10 09:31:30.854730 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.854740 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.854750 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.854760 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.854770 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.854780 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.854790 | orchestrator | 2025-02-10 09:31:30.854800 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-10 09:31:30.854810 | orchestrator | Monday 10 February 2025 09:17:52 +0000 (0:00:01.336) 0:00:45.662 ******* 2025-02-10 09:31:30.854820 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.854830 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.854840 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.854850 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.854860 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.854870 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.854880 | orchestrator | 2025-02-10 09:31:30.854890 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-02-10 09:31:30.854932 | orchestrator | Monday 10 February 2025 09:17:53 +0000 (0:00:00.923) 0:00:46.585 ******* 2025-02-10 09:31:30.854943 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:31:30.854954 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:31:30.854964 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-10 09:31:30.854974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:31:30.854984 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-10 09:31:30.854994 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-10 09:31:30.855004 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.855014 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-10 09:31:30.855024 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-10 09:31:30.855035 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.855045 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:31:30.855056 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-10 09:31:30.855507 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.855538 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-10 09:31:30.855553 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:31:30.855569 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-10 09:31:30.855586 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-10 09:31:30.855602 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:31:30.855619 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.855634 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-10 09:31:30.855667 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-10 09:31:30.855685 | orchestrator | skipping: [testbed-node-1] 2025-02-10 
09:31:30.855702 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-10 09:31:30.855718 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.855733 | orchestrator | 2025-02-10 09:31:30.855750 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-02-10 09:31:30.855766 | orchestrator | Monday 10 February 2025 09:17:57 +0000 (0:00:03.796) 0:00:50.382 ******* 2025-02-10 09:31:30.855783 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:31:30.855801 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-10 09:31:30.855953 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-10 09:31:30.855970 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:31:30.856191 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-10 09:31:30.856207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:31:30.856217 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-10 09:31:30.856227 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-10 09:31:30.856237 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:31:30.856247 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.856258 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-10 09:31:30.856268 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:31:30.856278 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.856288 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.856298 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-10 09:31:30.856308 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-10 09:31:30.856320 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:31:30.856337 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.856454 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-10 09:31:30.856476 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-10 09:31:30.856487 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.856497 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-10 09:31:30.856508 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-10 09:31:30.856518 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.856538 | orchestrator | 2025-02-10 09:31:30.856549 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-02-10 09:31:30.856566 | orchestrator | Monday 10 February 2025 09:18:00 +0000 (0:00:03.188) 0:00:53.570 ******* 2025-02-10 09:31:30.856583 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-02-10 09:31:30.856604 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-02-10 09:31:30.856620 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:31:30.856752 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-02-10 09:31:30.856775 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-02-10 09:31:30.856792 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-02-10 09:31:30.857072 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-02-10 
09:31:30.857093 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-02-10 09:31:30.857104 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-02-10 09:31:30.857145 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-02-10 09:31:30.857158 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-02-10 09:31:30.857223 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-02-10 09:31:30.857234 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-02-10 09:31:30.857259 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-02-10 09:31:30.857270 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-02-10 09:31:30.857281 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-02-10 09:31:30.857292 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-02-10 09:31:30.857303 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-02-10 09:31:30.857314 | orchestrator | 2025-02-10 09:31:30.857325 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-02-10 09:31:30.857336 | orchestrator | Monday 10 February 2025 09:18:06 +0000 (0:00:06.233) 0:00:59.803 ******* 2025-02-10 09:31:30.857348 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:31:30.857416 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:31:30.857547 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:31:30.857561 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-10 09:31:30.857572 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-10 09:31:30.857583 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-10 09:31:30.857594 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.857606 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-10 09:31:30.857617 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-10 09:31:30.857627 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-10 09:31:30.857638 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.857649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:31:30.857706 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:31:30.857719 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:31:30.857730 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.857742 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-10 09:31:30.857753 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-10 09:31:30.857765 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-10 09:31:30.857776 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.857788 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.857799 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-10 09:31:30.857810 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-10 09:31:30.858213 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-10 09:31:30.858230 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.858242 | orchestrator | 2025-02-10 09:31:30.858253 | orchestrator | TASK [ceph-facts : set_fact 
_monitor_addresses to monitor_interface - ipv6] **** 2025-02-10 09:31:30.858264 | orchestrator | Monday 10 February 2025 09:18:07 +0000 (0:00:01.360) 0:01:01.164 ******* 2025-02-10 09:31:30.858275 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:31:30.858286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:31:30.858297 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:31:30.858307 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.858319 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-10 09:31:30.858329 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-10 09:31:30.858347 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-10 09:31:30.858359 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-10 09:31:30.858369 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-10 09:31:30.858380 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.858391 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-10 09:31:30.858402 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:31:30.858413 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:31:30.858434 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.858683 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:31:30.858709 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-10 09:31:30.858725 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-10 09:31:30.858741 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.858757 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-10 09:31:30.858774 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.858792 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-10 09:31:30.858802 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-10 09:31:30.858814 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-10 09:31:30.858831 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.858858 | orchestrator | 2025-02-10 09:31:30.859023 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-02-10 09:31:30.859044 | orchestrator | Monday 10 February 2025 09:18:08 +0000 (0:00:00.713) 0:01:01.878 ******* 2025-02-10 09:31:30.859309 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-10 09:31:30.859325 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-10 09:31:30.859334 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-10 09:31:30.859343 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-10 09:31:30.859352 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-10 09:31:30.859361 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-10 09:31:30.859369 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.859378 | orchestrator | 
skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-10 09:31:30.859387 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-10 09:31:30.859395 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-10 09:31:30.859404 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.859413 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-02-10 09:31:30.859422 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-10 09:31:30.859430 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-10 09:31:30.859439 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.859447 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-10 09:31:30.859456 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-02-10 09:31:30.859465 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-10 09:31:30.859473 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-10 09:31:30.859482 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-10 09:31:30.859490 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-02-10 09:31:30.859499 | orchestrator | 2025-02-10 09:31:30.859507 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-02-10 09:31:30.859516 | orchestrator | Monday 10 February 2025 09:18:09 +0000 (0:00:01.136) 0:01:03.015 ******* 2025-02-10 09:31:30.859525 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.859546 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.859565 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.859574 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.859583 | orchestrator | 2025-02-10 09:31:30.859591 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:31:30.859601 | orchestrator | Monday 10 February 2025 09:18:10 +0000 (0:00:01.042) 0:01:04.057 ******* 2025-02-10 09:31:30.859610 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.859618 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.859627 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.859635 | orchestrator | 2025-02-10 09:31:30.859644 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:31:30.859653 | orchestrator | Monday 10 February 2025 09:18:11 +0000 (0:00:01.014) 0:01:05.072 ******* 2025-02-10 09:31:30.859661 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.859669 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.859678 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.859687 | orchestrator | 2025-02-10 09:31:30.859798 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to 
radosgw_address_block ipv6] **** 2025-02-10 09:31:30.859808 | orchestrator | Monday 10 February 2025 09:18:12 +0000 (0:00:00.702) 0:01:05.775 ******* 2025-02-10 09:31:30.859817 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.859826 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.859835 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.859844 | orchestrator | 2025-02-10 09:31:30.859853 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:31:30.859862 | orchestrator | Monday 10 February 2025 09:18:13 +0000 (0:00:00.990) 0:01:06.765 ******* 2025-02-10 09:31:30.859871 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.859970 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.859992 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.860006 | orchestrator | 2025-02-10 09:31:30.860020 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:31:30.860034 | orchestrator | Monday 10 February 2025 09:18:14 +0000 (0:00:01.249) 0:01:08.015 ******* 2025-02-10 09:31:30.860048 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.860062 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.860076 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.860090 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.860105 | orchestrator | 2025-02-10 09:31:30.860119 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:31:30.860134 | orchestrator | Monday 10 February 2025 09:18:15 +0000 (0:00:00.937) 0:01:08.953 ******* 2025-02-10 09:31:30.860143 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.860152 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.860161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.860169 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.860178 | orchestrator | 2025-02-10 09:31:30.860187 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-10 09:31:30.860195 | orchestrator | Monday 10 February 2025 09:18:15 +0000 (0:00:00.382) 0:01:09.335 ******* 2025-02-10 09:31:30.860204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.860212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.860221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.860229 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.860238 | orchestrator | 2025-02-10 09:31:30.860247 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:31:30.860255 | orchestrator | Monday 10 February 2025 09:18:16 +0000 (0:00:00.435) 0:01:09.771 ******* 2025-02-10 09:31:30.860275 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.860283 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.860292 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.860300 | orchestrator | 2025-02-10 09:31:30.860309 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:31:30.860323 | orchestrator | Monday 10 February 2025 09:18:16 +0000 (0:00:00.567) 0:01:10.339 ******* 
2025-02-10 09:31:30.860332 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-02-10 09:31:30.860340 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-02-10 09:31:30.860349 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-02-10 09:31:30.860357 | orchestrator | 2025-02-10 09:31:30.860366 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:31:30.860374 | orchestrator | Monday 10 February 2025 09:18:18 +0000 (0:00:01.849) 0:01:12.188 ******* 2025-02-10 09:31:30.860383 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.860391 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.860400 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.860409 | orchestrator | 2025-02-10 09:31:30.860417 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:31:30.860426 | orchestrator | Monday 10 February 2025 09:18:19 +0000 (0:00:00.565) 0:01:12.753 ******* 2025-02-10 09:31:30.860434 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.860442 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.860451 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.860459 | orchestrator | 2025-02-10 09:31:30.860468 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:31:30.860476 | orchestrator | Monday 10 February 2025 09:18:20 +0000 (0:00:00.595) 0:01:13.348 ******* 2025-02-10 09:31:30.860485 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:31:30.860494 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.860502 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:31:30.860511 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.860520 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:31:30.860528 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.860536 | orchestrator | 2025-02-10 09:31:30.860545 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:31:30.860554 | orchestrator | Monday 10 February 2025 09:18:20 +0000 (0:00:00.869) 0:01:14.218 ******* 2025-02-10 09:31:30.860563 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.860572 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.860585 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.860599 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.860613 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.860624 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.860633 | orchestrator | 2025-02-10 09:31:30.860643 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:31:30.860652 | orchestrator | Monday 10 February 2025 09:18:21 +0000 (0:00:00.653) 0:01:14.871 ******* 2025-02-10 09:31:30.860662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.860671 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.860681 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-3)  2025-02-10 09:31:30.860690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.860700 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-10 09:31:30.860709 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.860719 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-10 09:31:30.860801 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-10 09:31:30.860815 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.860825 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-10 09:31:30.860834 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-10 09:31:30.860843 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.860853 | orchestrator | 2025-02-10 09:31:30.860863 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-02-10 09:31:30.860873 | orchestrator | Monday 10 February 2025 09:18:22 +0000 (0:00:00.827) 0:01:15.699 ******* 2025-02-10 09:31:30.860882 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.860892 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.860923 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.860933 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.860942 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.860951 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.860959 | orchestrator | 2025-02-10 09:31:30.860971 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-02-10 09:31:30.860985 | orchestrator | Monday 10 February 2025 09:18:23 +0000 (0:00:01.281) 0:01:16.981 ******* 2025-02-10 09:31:30.860999 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-10 09:31:30.861013 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:31:30.861027 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:31:30.861041 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-02-10 09:31:30.861054 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-10 09:31:30.861069 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-10 09:31:30.861084 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-10 09:31:30.861093 | orchestrator | 2025-02-10 09:31:30.861102 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-02-10 09:31:30.861110 | orchestrator | Monday 10 February 2025 09:18:24 +0000 (0:00:00.947) 0:01:17.928 ******* 2025-02-10 09:31:30.861121 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-10 09:31:30.861135 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:31:30.861149 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:31:30.861162 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-02-10 09:31:30.861177 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-10 09:31:30.861191 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-10 09:31:30.861206 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-10 09:31:30.861219 | orchestrator | 2025-02-10 09:31:30.861233 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-10 09:31:30.861248 | orchestrator | Monday 10 February 2025 09:18:27 +0000 (0:00:02.740) 0:01:20.668 ******* 2025-02-10 09:31:30.861264 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.861276 | orchestrator | 2025-02-10 09:31:30.861285 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-10 09:31:30.861300 | orchestrator | Monday 10 February 2025 09:18:29 +0000 (0:00:01.912) 0:01:22.581 ******* 2025-02-10 09:31:30.861309 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.861317 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.861345 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.861362 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.861371 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.861384 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.861399 | orchestrator | 2025-02-10 09:31:30.861413 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-10 09:31:30.861426 | orchestrator | Monday 10 February 2025 09:18:30 +0000 (0:00:01.330) 0:01:23.911 ******* 2025-02-10 09:31:30.861441 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.861461 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.861476 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.861489 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.861498 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.861506 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.861515 | orchestrator | 2025-02-10 09:31:30.861523 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-10 09:31:30.861532 | orchestrator | Monday 10 February 2025 09:18:31 +0000 (0:00:01.151) 0:01:25.062 ******* 2025-02-10 09:31:30.861540 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.861549 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.861557 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.861566 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.861574 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.861582 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.861591 | orchestrator | 2025-02-10 09:31:30.861600 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-10 09:31:30.861608 | orchestrator | Monday 10 February 2025 09:18:32 +0000 (0:00:00.861) 0:01:25.924 ******* 2025-02-10 09:31:30.861617 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.861625 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.861634 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.861642 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.861655 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.861669 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.861682 | orchestrator | 2025-02-10 
09:31:30.861697 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-10 09:31:30.861804 | orchestrator | Monday 10 February 2025 09:18:33 +0000 (0:00:00.862) 0:01:26.786 ******* 2025-02-10 09:31:30.861824 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.861833 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.861841 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.861850 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.861858 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.861867 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.861876 | orchestrator | 2025-02-10 09:31:30.861884 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-10 09:31:30.861939 | orchestrator | Monday 10 February 2025 09:18:34 +0000 (0:00:00.951) 0:01:27.738 ******* 2025-02-10 09:31:30.861952 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.861961 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.861969 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.861978 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.861986 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.861995 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.862003 | orchestrator | 2025-02-10 09:31:30.862033 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-10 09:31:30.862044 | orchestrator | Monday 10 February 2025 09:18:35 +0000 (0:00:00.937) 0:01:28.676 ******* 2025-02-10 09:31:30.862053 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.862061 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.862070 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.862078 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.862086 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.862095 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.862103 | orchestrator | 2025-02-10 09:31:30.862112 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-10 09:31:30.862131 | orchestrator | Monday 10 February 2025 09:18:35 +0000 (0:00:00.628) 0:01:29.304 ******* 2025-02-10 09:31:30.862139 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.862148 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.862156 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.862165 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.862173 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.862182 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.862190 | orchestrator | 2025-02-10 09:31:30.862198 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-10 09:31:30.862207 | orchestrator | Monday 10 February 2025 09:18:36 +0000 (0:00:00.922) 0:01:30.227 ******* 2025-02-10 09:31:30.862216 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.862231 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.862246 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.862260 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.862273 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.862286 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.862300 | orchestrator | 2025-02-10 
09:31:30.862314 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-10 09:31:30.862328 | orchestrator | Monday 10 February 2025 09:18:37 +0000 (0:00:00.669) 0:01:30.897 ******* 2025-02-10 09:31:30.862343 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.862355 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.862363 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.862372 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.862386 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.862395 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.862404 | orchestrator | 2025-02-10 09:31:30.862413 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-10 09:31:30.862422 | orchestrator | Monday 10 February 2025 09:18:38 +0000 (0:00:00.750) 0:01:31.647 ******* 2025-02-10 09:31:30.862431 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.862440 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.862449 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.862458 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.862467 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.862476 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.862485 | orchestrator | 2025-02-10 09:31:30.862494 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-10 09:31:30.862503 | orchestrator | Monday 10 February 2025 09:18:39 +0000 (0:00:01.233) 0:01:32.881 ******* 2025-02-10 09:31:30.862511 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.862519 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.862527 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.862535 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.862543 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.862551 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.862558 | orchestrator | 2025-02-10 09:31:30.862566 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-10 09:31:30.862574 | orchestrator | Monday 10 February 2025 09:18:40 +0000 (0:00:01.017) 0:01:33.898 ******* 2025-02-10 09:31:30.862582 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.862590 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.862598 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.862606 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.862614 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.862622 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.862629 | orchestrator | 2025-02-10 09:31:30.862637 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-10 09:31:30.862645 | orchestrator | Monday 10 February 2025 09:18:41 +0000 (0:00:00.863) 0:01:34.762 ******* 2025-02-10 09:31:30.862653 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.862673 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.862687 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.862699 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.862713 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.862726 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.862735 | orchestrator | 2025-02-10 09:31:30.862742 | orchestrator | TASK [ceph-handler : set_fact 
handler_mds_status] ****************************** 2025-02-10 09:31:30.862751 | orchestrator | Monday 10 February 2025 09:18:42 +0000 (0:00:00.849) 0:01:35.612 ******* 2025-02-10 09:31:30.862758 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.862766 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.862774 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.862783 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.862796 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.862810 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.862822 | orchestrator | 2025-02-10 09:31:30.862944 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-10 09:31:30.862966 | orchestrator | Monday 10 February 2025 09:18:42 +0000 (0:00:00.672) 0:01:36.285 ******* 2025-02-10 09:31:30.862975 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.862983 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.862991 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.862999 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.863007 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.863015 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.863023 | orchestrator | 2025-02-10 09:31:30.863032 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-10 09:31:30.863046 | orchestrator | Monday 10 February 2025 09:18:43 +0000 (0:00:00.788) 0:01:37.073 ******* 2025-02-10 09:31:30.863054 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.863063 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.863071 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.863082 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.863095 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.863108 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.863121 | orchestrator | 2025-02-10 09:31:30.863134 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-10 09:31:30.863148 | orchestrator | Monday 10 February 2025 09:18:44 +0000 (0:00:00.633) 0:01:37.706 ******* 2025-02-10 09:31:30.863161 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.863174 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.863183 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.863191 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.863199 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.863207 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.863215 | orchestrator | 2025-02-10 09:31:30.863223 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-10 09:31:30.863231 | orchestrator | Monday 10 February 2025 09:18:45 +0000 (0:00:01.018) 0:01:38.725 ******* 2025-02-10 09:31:30.863239 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.863247 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.863260 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.863269 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.863277 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.863285 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.863292 | orchestrator | 2025-02-10 09:31:30.863301 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] 
**************************** 2025-02-10 09:31:30.863309 | orchestrator | Monday 10 February 2025 09:18:46 +0000 (0:00:00.725) 0:01:39.451 ******* 2025-02-10 09:31:30.863317 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.863325 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.863332 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.863341 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.863354 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.863376 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.863389 | orchestrator | 2025-02-10 09:31:30.863401 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-10 09:31:30.863438 | orchestrator | Monday 10 February 2025 09:18:47 +0000 (0:00:00.992) 0:01:40.444 ******* 2025-02-10 09:31:30.863450 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.863458 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.863467 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.863480 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.863494 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.863506 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.863520 | orchestrator | 2025-02-10 09:31:30.863534 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-10 09:31:30.863547 | orchestrator | Monday 10 February 2025 09:18:47 +0000 (0:00:00.860) 0:01:41.304 ******* 2025-02-10 09:31:30.863556 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.863565 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.863574 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.863583 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.863592 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.863601 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.863610 | orchestrator | 2025-02-10 09:31:30.863618 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-10 09:31:30.863628 | orchestrator | Monday 10 February 2025 09:18:49 +0000 (0:00:01.421) 0:01:42.726 ******* 2025-02-10 09:31:30.863637 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.863645 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.863654 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.863663 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.863671 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.863680 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.863689 | orchestrator | 2025-02-10 09:31:30.863698 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-10 09:31:30.863707 | orchestrator | Monday 10 February 2025 09:18:50 +0000 (0:00:01.091) 0:01:43.817 ******* 2025-02-10 09:31:30.863715 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.863724 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.863733 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.863742 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.863751 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.863760 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.863769 | orchestrator | 2025-02-10 09:31:30.863778 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 
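(Aside on the ceph-handler steps above: each "check for a ... container" task runs only on the hosts of the matching group — the mon/mgr checks on testbed-node-0..2, the osd/mds/rgw checks on testbed-node-3..5 — and its result later feeds the corresponding handler_*_status fact. A minimal sketch of that check-and-fact pattern is below; the container_binary variable, the ceph_mon_container_stat register and the exact task bodies are illustrative assumptions, not the ceph-ansible source.)

```yaml
# Sketch only: the "check container, then set handler_*_status" pattern visible in
# the log above. Names and the docker/podman invocation are assumptions.
- name: check for a mon container
  ansible.builtin.command: "{{ container_binary | default('docker') }} ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}"
  register: ceph_mon_container_stat
  changed_when: false
  failed_when: false
  when: inventory_hostname in groups.get('mons', [])

- name: set_fact handler_mon_status
  ansible.builtin.set_fact:
    handler_mon_status: "{{ (ceph_mon_container_stat.stdout | default('')) | length > 0 }}"
  when: inventory_hostname in groups.get('mons', [])
```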
2025-02-10 09:31:30.863787 | orchestrator | Monday 10 February 2025 09:18:51 +0000 (0:00:01.399) 0:01:45.217 ******* 2025-02-10 09:31:30.863795 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.863804 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.863813 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.863821 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.863830 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.863839 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.863847 | orchestrator | 2025-02-10 09:31:30.863856 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-10 09:31:30.863865 | orchestrator | Monday 10 February 2025 09:18:52 +0000 (0:00:00.850) 0:01:46.067 ******* 2025-02-10 09:31:30.863874 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.863888 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.863923 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.864025 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.864048 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.864062 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.864075 | orchestrator | 2025-02-10 09:31:30.864088 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-10 09:31:30.864119 | orchestrator | Monday 10 February 2025 09:18:53 +0000 (0:00:01.074) 0:01:47.141 ******* 2025-02-10 09:31:30.864134 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.864142 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.864150 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.864158 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.864172 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.864180 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.864188 | orchestrator | 2025-02-10 09:31:30.864196 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-10 09:31:30.864205 | orchestrator | Monday 10 February 2025 09:18:54 +0000 (0:00:00.944) 0:01:48.086 ******* 2025-02-10 09:31:30.864213 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.864221 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.864229 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.864237 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.864244 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.864252 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.864260 | orchestrator | 2025-02-10 09:31:30.864268 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-10 09:31:30.864276 | orchestrator | Monday 10 February 2025 09:18:55 +0000 (0:00:01.243) 0:01:49.329 ******* 2025-02-10 09:31:30.864284 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.864292 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.864300 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.864308 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.864316 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.864323 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.864332 | orchestrator | 2025-02-10 09:31:30.864341 | orchestrator | TASK [ceph-config : 
set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-10 09:31:30.864354 | orchestrator | Monday 10 February 2025 09:18:56 +0000 (0:00:00.943) 0:01:50.273 ******* 2025-02-10 09:31:30.864369 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.864382 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.864395 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.864409 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.864420 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.864428 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.864436 | orchestrator | 2025-02-10 09:31:30.864447 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-10 09:31:30.864461 | orchestrator | Monday 10 February 2025 09:18:57 +0000 (0:00:00.995) 0:01:51.268 ******* 2025-02-10 09:31:30.864474 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.864487 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.864499 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.864510 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.864524 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.864537 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.864550 | orchestrator | 2025-02-10 09:31:30.864564 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-10 09:31:30.864577 | orchestrator | Monday 10 February 2025 09:18:58 +0000 (0:00:00.701) 0:01:51.970 ******* 2025-02-10 09:31:30.864590 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.864603 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.864615 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.864628 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.864641 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.864654 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.864667 | orchestrator | 2025-02-10 09:31:30.864681 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-10 09:31:30.864695 | orchestrator | Monday 10 February 2025 09:18:59 +0000 (0:00:00.954) 0:01:52.924 ******* 2025-02-10 09:31:30.864720 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:31:30.864735 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:31:30.864748 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.864763 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:31:30.864777 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:31:30.864791 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.864805 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:31:30.864818 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:31:30.864832 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.864846 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:31:30.864859 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:31:30.864874 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.864888 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:31:30.864922 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:31:30.864936 | orchestrator | skipping: [testbed-node-1] 
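(Aside on the ceph-config num_osds bookkeeping above: all of it is skipped on this run, since the report is only needed when new OSDs are to be created. The pattern the role follows is to run the report in JSON mode and count the planned OSDs. A rough sketch under the assumption that `devices` holds the candidate block devices and that the new-style report returns a JSON list of OSD specs; variable names are illustrative.)

```yaml
# Sketch only: derive num_osds from 'ceph-volume lvm batch --report'.
# 'devices' and the plain (non-containerized) ceph-volume call are assumptions.
- name: run 'ceph-volume lvm batch --report' to see how many osds are to be created
  ansible.builtin.command: >
    ceph-volume lvm batch --report --format=json {{ devices | join(' ') }}
  register: lvm_batch_report
  changed_when: false
  when: devices | default([]) | length > 0

- name: set_fact num_osds from the report
  ansible.builtin.set_fact:
    # new-style report: a JSON list with one entry per OSD to be created
    num_osds: "{{ (lvm_batch_report.stdout | from_json) | length }}"
  when: lvm_batch_report is not skipped
```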
2025-02-10 09:31:30.864949 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:31:30.864963 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:31:30.864977 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.864990 | orchestrator | 2025-02-10 09:31:30.865003 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-10 09:31:30.865017 | orchestrator | Monday 10 February 2025 09:19:00 +0000 (0:00:00.842) 0:01:53.767 ******* 2025-02-10 09:31:30.865030 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-02-10 09:31:30.865043 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-02-10 09:31:30.865055 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.865067 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-02-10 09:31:30.865080 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-10 09:31:30.865092 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.865217 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-10 09:31:30.865242 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-10 09:31:30.865255 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.865267 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-02-10 09:31:30.865280 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-02-10 09:31:30.865293 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.865307 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-02-10 09:31:30.865320 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-02-10 09:31:30.865333 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.865346 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-02-10 09:31:30.865359 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-02-10 09:31:30.865372 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.865385 | orchestrator | 2025-02-10 09:31:30.865399 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-10 09:31:30.865412 | orchestrator | Monday 10 February 2025 09:19:01 +0000 (0:00:01.037) 0:01:54.804 ******* 2025-02-10 09:31:30.865426 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.865438 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.865452 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.865460 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.865469 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.865477 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.865485 | orchestrator | 2025-02-10 09:31:30.865494 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-10 09:31:30.865502 | orchestrator | Monday 10 February 2025 09:19:02 +0000 (0:00:00.654) 0:01:55.458 ******* 2025-02-10 09:31:30.865520 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.865535 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.865543 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.865551 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.865560 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.865570 | orchestrator | 
skipping: [testbed-node-2] 2025-02-10 09:31:30.865583 | orchestrator | 2025-02-10 09:31:30.865596 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:31:30.865610 | orchestrator | Monday 10 February 2025 09:19:02 +0000 (0:00:00.769) 0:01:56.228 ******* 2025-02-10 09:31:30.865622 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.865634 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.865646 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.865659 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.865672 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.865685 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.865699 | orchestrator | 2025-02-10 09:31:30.865713 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:31:30.865726 | orchestrator | Monday 10 February 2025 09:19:03 +0000 (0:00:00.576) 0:01:56.804 ******* 2025-02-10 09:31:30.865739 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.865752 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.865766 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.865774 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.865782 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.865790 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.865798 | orchestrator | 2025-02-10 09:31:30.865806 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:31:30.865814 | orchestrator | Monday 10 February 2025 09:19:04 +0000 (0:00:00.713) 0:01:57.518 ******* 2025-02-10 09:31:30.865822 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.865832 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.865840 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.865849 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.865858 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.865941 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.865960 | orchestrator | 2025-02-10 09:31:30.865974 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:31:30.865988 | orchestrator | Monday 10 February 2025 09:19:04 +0000 (0:00:00.514) 0:01:58.033 ******* 2025-02-10 09:31:30.866001 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.866039 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.866055 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.866069 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.866082 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.866095 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.866108 | orchestrator | 2025-02-10 09:31:30.866122 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:31:30.866135 | orchestrator | Monday 10 February 2025 09:19:05 +0000 (0:00:00.653) 0:01:58.687 ******* 2025-02-10 09:31:30.866149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.866161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.866173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.866185 | orchestrator | 
skipping: [testbed-node-3] 2025-02-10 09:31:30.866196 | orchestrator | 2025-02-10 09:31:30.866208 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:31:30.866219 | orchestrator | Monday 10 February 2025 09:19:05 +0000 (0:00:00.375) 0:01:59.062 ******* 2025-02-10 09:31:30.866230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.866241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.866264 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.866276 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.866293 | orchestrator | 2025-02-10 09:31:30.866305 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-10 09:31:30.866316 | orchestrator | Monday 10 February 2025 09:19:06 +0000 (0:00:00.414) 0:01:59.476 ******* 2025-02-10 09:31:30.866328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.866432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.866446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.866454 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.866461 | orchestrator | 2025-02-10 09:31:30.866468 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:31:30.866475 | orchestrator | Monday 10 February 2025 09:19:06 +0000 (0:00:00.428) 0:01:59.904 ******* 2025-02-10 09:31:30.866482 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.866489 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.866496 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.866503 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.866510 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.866517 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.866524 | orchestrator | 2025-02-10 09:31:30.866531 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:31:30.866538 | orchestrator | Monday 10 February 2025 09:19:07 +0000 (0:00:00.672) 0:02:00.577 ******* 2025-02-10 09:31:30.866546 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:31:30.866553 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.866560 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:31:30.866567 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.866574 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:31:30.866581 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.866588 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:31:30.866596 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.866603 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:31:30.866609 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.866617 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:31:30.866623 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.866631 | orchestrator | 2025-02-10 09:31:30.866638 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:31:30.866645 | orchestrator | Monday 10 February 2025 09:19:08 +0000 (0:00:01.291) 0:02:01.868 ******* 2025-02-10 
09:31:30.866652 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.866658 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.866665 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.866672 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.866679 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.866686 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.866693 | orchestrator | 2025-02-10 09:31:30.866700 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:31:30.866707 | orchestrator | Monday 10 February 2025 09:19:09 +0000 (0:00:00.860) 0:02:02.729 ******* 2025-02-10 09:31:30.866714 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.866726 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.866733 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.866740 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.866747 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.866753 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.866760 | orchestrator | 2025-02-10 09:31:30.866768 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:31:30.866775 | orchestrator | Monday 10 February 2025 09:19:10 +0000 (0:00:00.986) 0:02:03.715 ******* 2025-02-10 09:31:30.866788 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:31:30.866795 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.866802 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:31:30.866809 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.866816 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:31:30.866823 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.866834 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:31:30.866842 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.866849 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:31:30.866856 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.866863 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:31:30.866870 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.866877 | orchestrator | 2025-02-10 09:31:30.866884 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:31:30.866891 | orchestrator | Monday 10 February 2025 09:19:11 +0000 (0:00:00.984) 0:02:04.700 ******* 2025-02-10 09:31:30.866920 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.866933 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.866941 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.866951 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.866962 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.866972 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.866984 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.866996 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.867007 | orchestrator | 
skipping: [testbed-node-2] 2025-02-10 09:31:30.867019 | orchestrator | 2025-02-10 09:31:30.867028 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:31:30.867035 | orchestrator | Monday 10 February 2025 09:19:12 +0000 (0:00:01.357) 0:02:06.058 ******* 2025-02-10 09:31:30.867042 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.867096 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.867109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.867120 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-10 09:31:30.867130 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-10 09:31:30.867141 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-10 09:31:30.867229 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.867247 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-10 09:31:30.867259 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-10 09:31:30.867271 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-10 09:31:30.867278 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.867286 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:31:30.867293 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:31:30.867300 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.867307 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:31:30.867314 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-10 09:31:30.867321 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-10 09:31:30.867328 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-10 09:31:30.867335 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.867342 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.867349 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-10 09:31:30.867377 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-10 09:31:30.867384 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-10 09:31:30.867391 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.867398 | orchestrator | 2025-02-10 09:31:30.867405 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-10 09:31:30.867416 | orchestrator | Monday 10 February 2025 09:19:15 +0000 (0:00:02.553) 0:02:08.611 ******* 2025-02-10 09:31:30.867424 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.867431 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.867438 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.867447 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.867456 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.867463 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.867470 | orchestrator | 2025-02-10 09:31:30.867477 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-10 09:31:30.867484 | orchestrator | Monday 10 February 2025 09:19:16 +0000 (0:00:01.537) 0:02:10.148 ******* 2025-02-10 09:31:30.867492 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2025-02-10 09:31:30.867499 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.867506 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-10 09:31:30.867513 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.867520 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-10 09:31:30.867527 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.867534 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.867540 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.867547 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.867554 | orchestrator | 2025-02-10 09:31:30.867561 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-10 09:31:30.867568 | orchestrator | Monday 10 February 2025 09:19:18 +0000 (0:00:01.524) 0:02:11.673 ******* 2025-02-10 09:31:30.867575 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.867582 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.867589 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.867596 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.867602 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.867609 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.867616 | orchestrator | 2025-02-10 09:31:30.867623 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-10 09:31:30.867630 | orchestrator | Monday 10 February 2025 09:19:19 +0000 (0:00:01.469) 0:02:13.143 ******* 2025-02-10 09:31:30.867637 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.867644 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.867651 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.867658 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.867664 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.867671 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.867695 | orchestrator | 2025-02-10 09:31:30.867702 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-02-10 09:31:30.867709 | orchestrator | Monday 10 February 2025 09:19:21 +0000 (0:00:01.512) 0:02:14.656 ******* 2025-02-10 09:31:30.867716 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.867723 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.867730 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.867737 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.867744 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.867759 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.867767 | orchestrator | 2025-02-10 09:31:30.867774 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-02-10 09:31:30.867781 | orchestrator | Monday 10 February 2025 09:19:23 +0000 (0:00:02.078) 0:02:16.734 ******* 2025-02-10 09:31:30.867788 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.867801 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.867808 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.867815 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.867822 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.867829 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.867835 | orchestrator | 2025-02-10 09:31:30.867842 | orchestrator | TASK 
[ceph-container-common : include prerequisites.yml] *********************** 2025-02-10 09:31:30.867850 | orchestrator | Monday 10 February 2025 09:19:26 +0000 (0:00:02.703) 0:02:19.438 ******* 2025-02-10 09:31:30.867857 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.867866 | orchestrator | 2025-02-10 09:31:30.867872 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-02-10 09:31:30.867879 | orchestrator | Monday 10 February 2025 09:19:27 +0000 (0:00:01.433) 0:02:20.872 ******* 2025-02-10 09:31:30.867886 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.867909 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.867917 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.867982 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.867993 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.868001 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.868008 | orchestrator | 2025-02-10 09:31:30.868016 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-02-10 09:31:30.868024 | orchestrator | Monday 10 February 2025 09:19:28 +0000 (0:00:00.712) 0:02:21.585 ******* 2025-02-10 09:31:30.868031 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.868039 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.868047 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.868054 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.868062 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.868069 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.868077 | orchestrator | 2025-02-10 09:31:30.868084 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-02-10 09:31:30.868092 | orchestrator | Monday 10 February 2025 09:19:29 +0000 (0:00:01.101) 0:02:22.686 ******* 2025-02-10 09:31:30.868100 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-10 09:31:30.868107 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-10 09:31:30.868115 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-10 09:31:30.868123 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-10 09:31:30.868130 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-10 09:31:30.868138 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-10 09:31:30.868146 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-10 09:31:30.868154 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-10 09:31:30.868161 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-10 09:31:30.868169 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-10 09:31:30.868176 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-10 09:31:30.868184 | orchestrator | ok: [testbed-node-2] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-10 09:31:30.868191 | orchestrator | 2025-02-10 09:31:30.868199 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-02-10 09:31:30.868212 | orchestrator | Monday 10 February 2025 09:19:31 +0000 (0:00:01.939) 0:02:24.625 ******* 2025-02-10 09:31:30.868219 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.868227 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.868240 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.868247 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.868255 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.868263 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.868270 | orchestrator | 2025-02-10 09:31:30.868278 | orchestrator | TASK [ceph-container-common : restore certificates selinux context] ************ 2025-02-10 09:31:30.868285 | orchestrator | Monday 10 February 2025 09:19:32 +0000 (0:00:01.406) 0:02:26.032 ******* 2025-02-10 09:31:30.868293 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.868301 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.868308 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.868316 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.868323 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.868331 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.868338 | orchestrator | 2025-02-10 09:31:30.868346 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-02-10 09:31:30.868354 | orchestrator | Monday 10 February 2025 09:19:33 +0000 (0:00:01.279) 0:02:27.312 ******* 2025-02-10 09:31:30.868361 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.868369 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.868377 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.868384 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.868392 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.868399 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.868407 | orchestrator | 2025-02-10 09:31:30.868415 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-02-10 09:31:30.868422 | orchestrator | Monday 10 February 2025 09:19:34 +0000 (0:00:00.899) 0:02:28.212 ******* 2025-02-10 09:31:30.868430 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.868438 | orchestrator | 2025-02-10 09:31:30.868446 | orchestrator | TASK [ceph-container-common : pulling nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy image] *** 2025-02-10 09:31:30.868453 | orchestrator | Monday 10 February 2025 09:19:36 +0000 (0:00:01.828) 0:02:30.040 ******* 2025-02-10 09:31:30.868461 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.868469 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.868477 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.868484 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.868492 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.868499 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.868507 | orchestrator | 2025-02-10 09:31:30.868515 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container 
images] *** 2025-02-10 09:31:30.868523 | orchestrator | Monday 10 February 2025 09:20:00 +0000 (0:00:23.529) 0:02:53.570 ******* 2025-02-10 09:31:30.868530 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-10 09:31:30.868538 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-10 09:31:30.868546 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-10 09:31:30.868553 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.868601 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-10 09:31:30.868612 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-10 09:31:30.868619 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-10 09:31:30.868627 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.868634 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-10 09:31:30.868642 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-10 09:31:30.868649 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-10 09:31:30.868662 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.868669 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-10 09:31:30.868676 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-10 09:31:30.868683 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-10 09:31:30.868690 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.868697 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-10 09:31:30.868704 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-10 09:31:30.868711 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-10 09:31:30.868718 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.868725 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-10 09:31:30.868732 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-10 09:31:30.868739 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-10 09:31:30.868745 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.868752 | orchestrator | 2025-02-10 09:31:30.868759 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-02-10 09:31:30.868766 | orchestrator | Monday 10 February 2025 09:20:01 +0000 (0:00:00.895) 0:02:54.466 ******* 2025-02-10 09:31:30.868773 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.868784 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.868791 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.868800 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.868811 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.868822 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.868833 | orchestrator | 2025-02-10 09:31:30.868844 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-02-10 09:31:30.868854 | 
orchestrator | Monday 10 February 2025 09:20:01 +0000 (0:00:00.717) 0:02:55.184 ******* 2025-02-10 09:31:30.868865 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.868875 | orchestrator | 2025-02-10 09:31:30.868891 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-02-10 09:31:30.868916 | orchestrator | Monday 10 February 2025 09:20:01 +0000 (0:00:00.118) 0:02:55.302 ******* 2025-02-10 09:31:30.868926 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.868937 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.868948 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.868959 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.868970 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.868981 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.868992 | orchestrator | 2025-02-10 09:31:30.868999 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-02-10 09:31:30.869006 | orchestrator | Monday 10 February 2025 09:20:02 +0000 (0:00:00.817) 0:02:56.120 ******* 2025-02-10 09:31:30.869013 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.869020 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.869026 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.869033 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.869040 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.869047 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.869054 | orchestrator | 2025-02-10 09:31:30.869060 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-02-10 09:31:30.869067 | orchestrator | Monday 10 February 2025 09:20:03 +0000 (0:00:00.707) 0:02:56.828 ******* 2025-02-10 09:31:30.869074 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.869081 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.869088 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.869095 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.869107 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.869114 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.869121 | orchestrator | 2025-02-10 09:31:30.869128 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-02-10 09:31:30.869135 | orchestrator | Monday 10 February 2025 09:20:04 +0000 (0:00:01.043) 0:02:57.871 ******* 2025-02-10 09:31:30.869142 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.869149 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.869156 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.869163 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.869169 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.869176 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.869183 | orchestrator | 2025-02-10 09:31:30.869190 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-02-10 09:31:30.869197 | orchestrator | Monday 10 February 2025 09:20:07 +0000 (0:00:02.814) 0:03:00.686 ******* 2025-02-10 09:31:30.869204 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.869211 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.869217 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.869224 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.869231 | 
orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.869238 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.869246 | orchestrator | 2025-02-10 09:31:30.869253 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-02-10 09:31:30.869261 | orchestrator | Monday 10 February 2025 09:20:08 +0000 (0:00:01.049) 0:03:01.735 ******* 2025-02-10 09:31:30.869322 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.869335 | orchestrator | 2025-02-10 09:31:30.869343 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-02-10 09:31:30.869351 | orchestrator | Monday 10 February 2025 09:20:09 +0000 (0:00:01.388) 0:03:03.123 ******* 2025-02-10 09:31:30.869358 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.869366 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.869391 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.869400 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.869408 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.869416 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.869424 | orchestrator | 2025-02-10 09:31:30.869433 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-02-10 09:31:30.869441 | orchestrator | Monday 10 February 2025 09:20:10 +0000 (0:00:00.847) 0:03:03.970 ******* 2025-02-10 09:31:30.869450 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.869458 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.869466 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.869475 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.869483 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.869491 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.869499 | orchestrator | 2025-02-10 09:31:30.869507 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-02-10 09:31:30.869516 | orchestrator | Monday 10 February 2025 09:20:11 +0000 (0:00:01.145) 0:03:05.115 ******* 2025-02-10 09:31:30.869524 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.869532 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.869540 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.869552 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.869560 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.869569 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.869576 | orchestrator | 2025-02-10 09:31:30.869585 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-02-10 09:31:30.869593 | orchestrator | Monday 10 February 2025 09:20:12 +0000 (0:00:00.975) 0:03:06.091 ******* 2025-02-10 09:31:30.869602 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.869616 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.869623 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.869631 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.869638 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.869645 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.869652 | orchestrator | 2025-02-10 09:31:30.869660 | orchestrator | TASK 
[ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-02-10 09:31:30.869667 | orchestrator | Monday 10 February 2025 09:20:14 +0000 (0:00:01.365) 0:03:07.456 ******* 2025-02-10 09:31:30.869674 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.869682 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.869689 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.869696 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.869704 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.869711 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.869718 | orchestrator | 2025-02-10 09:31:30.869726 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-02-10 09:31:30.869733 | orchestrator | Monday 10 February 2025 09:20:14 +0000 (0:00:00.747) 0:03:08.204 ******* 2025-02-10 09:31:30.869740 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.869747 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.869755 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.869762 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.869769 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.869777 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.869784 | orchestrator | 2025-02-10 09:31:30.869791 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-02-10 09:31:30.869799 | orchestrator | Monday 10 February 2025 09:20:15 +0000 (0:00:00.987) 0:03:09.191 ******* 2025-02-10 09:31:30.869806 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.869813 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.869820 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.869828 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.869835 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.869842 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.869850 | orchestrator | 2025-02-10 09:31:30.869857 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-02-10 09:31:30.869864 | orchestrator | Monday 10 February 2025 09:20:16 +0000 (0:00:00.733) 0:03:09.924 ******* 2025-02-10 09:31:30.869872 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.869879 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.869886 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.869942 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.869957 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.869969 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.869980 | orchestrator | 2025-02-10 09:31:30.869990 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-10 09:31:30.870001 | orchestrator | Monday 10 February 2025 09:20:18 +0000 (0:00:01.558) 0:03:11.483 ******* 2025-02-10 09:31:30.870037 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.870053 | orchestrator | 2025-02-10 09:31:30.870064 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-02-10 09:31:30.870081 | orchestrator | Monday 10 February 2025 09:20:19 +0000 (0:00:01.562) 0:03:13.046 ******* 2025-02-10 09:31:30.870092 | orchestrator | 
changed: [testbed-node-3] => (item=/etc/ceph) 2025-02-10 09:31:30.870104 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-02-10 09:31:30.870117 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-02-10 09:31:30.870129 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-02-10 09:31:30.870141 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-02-10 09:31:30.870148 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-02-10 09:31:30.870224 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-02-10 09:31:30.870233 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-02-10 09:31:30.870239 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-02-10 09:31:30.870246 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-02-10 09:31:30.870252 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-02-10 09:31:30.870258 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-02-10 09:31:30.870265 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-02-10 09:31:30.870271 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-02-10 09:31:30.870278 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-02-10 09:31:30.870284 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-02-10 09:31:30.870290 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-02-10 09:31:30.870297 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-02-10 09:31:30.870303 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-02-10 09:31:30.870310 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-02-10 09:31:30.870316 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-02-10 09:31:30.870322 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-02-10 09:31:30.870328 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-02-10 09:31:30.870334 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-02-10 09:31:30.870340 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-02-10 09:31:30.870346 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-02-10 09:31:30.870352 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-02-10 09:31:30.870363 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-02-10 09:31:30.870369 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-02-10 09:31:30.870375 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-02-10 09:31:30.870381 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-02-10 09:31:30.870387 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-02-10 09:31:30.870394 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-02-10 09:31:30.870400 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-02-10 09:31:30.870406 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-02-10 09:31:30.870412 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-10 09:31:30.870418 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-02-10 
09:31:30.870428 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-02-10 09:31:30.870434 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-02-10 09:31:30.870440 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-02-10 09:31:30.870446 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-10 09:31:30.870453 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-10 09:31:30.870459 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-10 09:31:30.870465 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-10 09:31:30.870471 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-02-10 09:31:30.870477 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-02-10 09:31:30.870483 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-10 09:31:30.870490 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-10 09:31:30.870496 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-10 09:31:30.870507 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-10 09:31:30.870513 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-10 09:31:30.870520 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-02-10 09:31:30.870526 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-10 09:31:30.870532 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-10 09:31:30.870538 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-10 09:31:30.870544 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-10 09:31:30.870550 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-10 09:31:30.870556 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-10 09:31:30.870563 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-10 09:31:30.870569 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-10 09:31:30.870575 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-10 09:31:30.870581 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-10 09:31:30.870587 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-10 09:31:30.870593 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-10 09:31:30.870606 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-10 09:31:30.870649 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-10 09:31:30.870658 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-10 09:31:30.870664 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-10 09:31:30.870670 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-10 09:31:30.870676 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-10 09:31:30.870682 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-10 09:31:30.870689 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-02-10 09:31:30.870695 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-10 09:31:30.870701 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-10 09:31:30.870707 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-10 09:31:30.870713 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-02-10 09:31:30.870720 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-10 09:31:30.870726 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-02-10 09:31:30.870732 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-02-10 09:31:30.870738 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-02-10 09:31:30.870744 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-10 09:31:30.870750 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-02-10 09:31:30.870756 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-10 09:31:30.870763 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-02-10 09:31:30.870769 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-02-10 09:31:30.870775 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-02-10 09:31:30.870781 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-10 09:31:30.870787 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-02-10 09:31:30.870793 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-02-10 09:31:30.870804 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-02-10 09:31:30.870823 | orchestrator | 2025-02-10 09:31:30.870830 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-10 09:31:30.870837 | orchestrator | Monday 10 February 2025 09:20:27 +0000 (0:00:07.489) 0:03:20.535 ******* 2025-02-10 09:31:30.870844 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.870851 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.870857 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.870864 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.870871 | orchestrator | 2025-02-10 09:31:30.870878 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-02-10 09:31:30.870885 | orchestrator | Monday 10 February 2025 09:20:28 +0000 (0:00:01.393) 0:03:21.929 ******* 2025-02-10 09:31:30.870892 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-02-10 09:31:30.870918 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-02-10 09:31:30.870925 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-02-10 09:31:30.870931 | orchestrator | 2025-02-10 09:31:30.870937 | orchestrator | 
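The "create ceph initial directories" task above pre-creates the directory tree that the containerized daemons later bind-mount: /etc/ceph, the per-daemon and bootstrap keyring subdirectories under /var/lib/ceph, plus /var/run/ceph and /var/log/ceph, and the RGW nodes additionally get one instance directory per rgw_instances entry. A rough sketch of such a loop (list abbreviated, ownership/permissions assumed rather than copied from the role):

    - name: create ceph initial directories (sketch)
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        mode: "0755"
      loop:
        - /etc/ceph
        - /var/lib/ceph/mon
        - /var/lib/ceph/osd
        - /var/lib/ceph/bootstrap-osd
        - /var/run/ceph
        - /var/log/ceph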
TASK [ceph-config : generate environment file] ********************************* 2025-02-10 09:31:30.870943 | orchestrator | Monday 10 February 2025 09:20:29 +0000 (0:00:01.110) 0:03:23.039 ******* 2025-02-10 09:31:30.870950 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-02-10 09:31:30.870957 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-02-10 09:31:30.870963 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-02-10 09:31:30.870969 | orchestrator | 2025-02-10 09:31:30.870975 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-10 09:31:30.870981 | orchestrator | Monday 10 February 2025 09:20:31 +0000 (0:00:01.674) 0:03:24.713 ******* 2025-02-10 09:31:30.870988 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.870994 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.871000 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.871006 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.871012 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.871018 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.871024 | orchestrator | 2025-02-10 09:31:30.871031 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-10 09:31:30.871037 | orchestrator | Monday 10 February 2025 09:20:32 +0000 (0:00:01.279) 0:03:25.993 ******* 2025-02-10 09:31:30.871043 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.871049 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.871055 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.871061 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.871068 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.871074 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.871083 | orchestrator | 2025-02-10 09:31:30.871093 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-10 09:31:30.871153 | orchestrator | Monday 10 February 2025 09:20:33 +0000 (0:00:00.798) 0:03:26.792 ******* 2025-02-10 09:31:30.871169 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.871179 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.871189 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.871206 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.871216 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.871226 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.871243 | orchestrator | 2025-02-10 09:31:30.871254 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-10 09:31:30.871264 | orchestrator | Monday 10 February 2025 09:20:34 +0000 (0:00:01.008) 0:03:27.800 ******* 2025-02-10 09:31:30.871274 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.871285 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.871294 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.871304 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.871314 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.871324 | orchestrator | skipping: [testbed-node-2] 2025-02-10 
09:31:30.871334 | orchestrator | 2025-02-10 09:31:30.871345 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-10 09:31:30.871352 | orchestrator | Monday 10 February 2025 09:20:35 +0000 (0:00:00.787) 0:03:28.588 ******* 2025-02-10 09:31:30.871358 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.871364 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.871370 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.871376 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.871382 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.871389 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.871395 | orchestrator | 2025-02-10 09:31:30.871401 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-10 09:31:30.871408 | orchestrator | Monday 10 February 2025 09:20:36 +0000 (0:00:00.955) 0:03:29.544 ******* 2025-02-10 09:31:30.871414 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.871420 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.871426 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.871432 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.871438 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.871444 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.871450 | orchestrator | 2025-02-10 09:31:30.871456 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-10 09:31:30.871463 | orchestrator | Monday 10 February 2025 09:20:36 +0000 (0:00:00.734) 0:03:30.278 ******* 2025-02-10 09:31:30.871470 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.871476 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.871482 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.871488 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.871494 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.871500 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.871506 | orchestrator | 2025-02-10 09:31:30.871512 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-10 09:31:30.871519 | orchestrator | Monday 10 February 2025 09:20:38 +0000 (0:00:01.235) 0:03:31.514 ******* 2025-02-10 09:31:30.871525 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.871531 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.871537 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.871543 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.871550 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.871556 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.871562 | orchestrator | 2025-02-10 09:31:30.871568 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-10 09:31:30.871589 | orchestrator | Monday 10 February 2025 09:20:39 +0000 (0:00:01.240) 0:03:32.755 ******* 2025-02-10 09:31:30.871596 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.871602 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.871608 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.871614 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.871620 | orchestrator | ok: 
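The num_osds bookkeeping above combines two sources: a dry-run "ceph-volume lvm batch --report" for OSDs that would be created (skipped on every node in this run) and "ceph-volume lvm list" for OSDs that already exist on the storage nodes. A hedged sketch of the report-based count, assuming the newer JSON report format with a top-level osds list and hypothetical device paths:

    - name: run 'ceph-volume lvm batch --report' (sketch)
      ansible.builtin.command: ceph-volume lvm batch --report --format=json /dev/vdb /dev/vdc
      register: lvm_batch_report
      changed_when: false

    - name: set_fact num_osds from the report (new report format assumed)
      ansible.builtin.set_fact:
        num_osds: "{{ (lvm_batch_report.stdout | from_json).osds | length }}"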
[testbed-node-5] 2025-02-10 09:31:30.871626 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.871632 | orchestrator | 2025-02-10 09:31:30.871639 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-10 09:31:30.871651 | orchestrator | Monday 10 February 2025 09:20:41 +0000 (0:00:02.076) 0:03:34.831 ******* 2025-02-10 09:31:30.871657 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.871663 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.871669 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.871675 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.871681 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.871687 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.871694 | orchestrator | 2025-02-10 09:31:30.871700 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-10 09:31:30.871706 | orchestrator | Monday 10 February 2025 09:20:42 +0000 (0:00:00.849) 0:03:35.681 ******* 2025-02-10 09:31:30.871712 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:31:30.871718 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:31:30.871725 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.871731 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:31:30.871737 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:31:30.871743 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.871749 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:31:30.871755 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:31:30.871761 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.871767 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:31:30.871774 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:31:30.871780 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.871787 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:31:30.871794 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:31:30.871801 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.871808 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:31:30.871814 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:31:30.871821 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.871828 | orchestrator | 2025-02-10 09:31:30.871889 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-10 09:31:30.871921 | orchestrator | Monday 10 February 2025 09:20:43 +0000 (0:00:01.040) 0:03:36.721 ******* 2025-02-10 09:31:30.871929 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-02-10 09:31:30.871939 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-02-10 09:31:30.871947 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-02-10 09:31:30.871954 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-02-10 09:31:30.871961 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-02-10 09:31:30.871968 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-02-10 09:31:30.871975 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-02-10 09:31:30.871982 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-02-10 
09:31:30.871988 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.871999 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-02-10 09:31:30.872006 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-02-10 09:31:30.872014 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.872021 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-02-10 09:31:30.872028 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-02-10 09:31:30.872035 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.872042 | orchestrator | 2025-02-10 09:31:30.872049 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-10 09:31:30.872056 | orchestrator | Monday 10 February 2025 09:20:44 +0000 (0:00:01.084) 0:03:37.806 ******* 2025-02-10 09:31:30.872063 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.872070 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.872082 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.872089 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.872096 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.872103 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.872110 | orchestrator | 2025-02-10 09:31:30.872117 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-10 09:31:30.872124 | orchestrator | Monday 10 February 2025 09:20:45 +0000 (0:00:01.158) 0:03:38.965 ******* 2025-02-10 09:31:30.872131 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.872138 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.872145 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.872151 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.872157 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.872163 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.872169 | orchestrator | 2025-02-10 09:31:30.872176 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:31:30.872182 | orchestrator | Monday 10 February 2025 09:20:46 +0000 (0:00:00.965) 0:03:39.930 ******* 2025-02-10 09:31:30.872188 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.872194 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.872201 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.872207 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.872214 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.872224 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.872234 | orchestrator | 2025-02-10 09:31:30.872244 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:31:30.872254 | orchestrator | Monday 10 February 2025 09:20:47 +0000 (0:00:01.256) 0:03:41.187 ******* 2025-02-10 09:31:30.872264 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.872273 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.872283 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.872292 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.872302 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.872312 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.872321 | orchestrator | 2025-02-10 
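The _osd_memory_target handling above first honors an explicit osd_memory_target in ceph_conf_overrides (skipped here), then strips that key from the override dict on the OSD nodes and derives a per-OSD value itself from the host memory and the OSD count. A sketch of that kind of derivation; the 0.7 safety factor and the exact formula are assumptions for illustration, not the role's actual defaults:

    - name: set_fact _osd_memory_target (sketch, formula assumed)
      ansible.builtin.set_fact:
        _osd_memory_target: "{{ ((ansible_facts['memtotal_mb'] | int * 1048576 * 0.7) / (num_osds | int)) | int }}"
      when: num_osds | default(0) | int > 0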
09:31:30.872331 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:31:30.872341 | orchestrator | Monday 10 February 2025 09:20:49 +0000 (0:00:01.525) 0:03:42.713 ******* 2025-02-10 09:31:30.872351 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.872360 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.872369 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.872378 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.872388 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.872398 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.872408 | orchestrator | 2025-02-10 09:31:30.872418 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:31:30.872428 | orchestrator | Monday 10 February 2025 09:20:50 +0000 (0:00:01.468) 0:03:44.181 ******* 2025-02-10 09:31:30.872437 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.872448 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.872458 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.872468 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.872478 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.872487 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.872497 | orchestrator | 2025-02-10 09:31:30.872507 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:31:30.872518 | orchestrator | Monday 10 February 2025 09:20:51 +0000 (0:00:00.948) 0:03:45.129 ******* 2025-02-10 09:31:30.872525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.872531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.872537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.872553 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.872563 | orchestrator | 2025-02-10 09:31:30.872573 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:31:30.872583 | orchestrator | Monday 10 February 2025 09:20:52 +0000 (0:00:00.553) 0:03:45.683 ******* 2025-02-10 09:31:30.872594 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.872605 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.872675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.872685 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.872692 | orchestrator | 2025-02-10 09:31:30.872698 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-10 09:31:30.872704 | orchestrator | Monday 10 February 2025 09:20:52 +0000 (0:00:00.489) 0:03:46.172 ******* 2025-02-10 09:31:30.872711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.872717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.872724 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.872730 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.872736 | orchestrator | 2025-02-10 09:31:30.872743 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:31:30.872752 | orchestrator | Monday 10 February 2025 09:20:53 +0000 
(0:00:01.060) 0:03:47.233 ******* 2025-02-10 09:31:30.872762 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.872772 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.872782 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.872792 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.872803 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.872813 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.872824 | orchestrator | 2025-02-10 09:31:30.872834 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:31:30.872845 | orchestrator | Monday 10 February 2025 09:20:55 +0000 (0:00:01.160) 0:03:48.394 ******* 2025-02-10 09:31:30.872853 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-02-10 09:31:30.872859 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:31:30.872866 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.872872 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-02-10 09:31:30.872884 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-02-10 09:31:30.872890 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:31:30.872915 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.872922 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:31:30.872928 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.872935 | orchestrator | 2025-02-10 09:31:30.872945 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:31:30.872955 | orchestrator | Monday 10 February 2025 09:20:57 +0000 (0:00:02.064) 0:03:50.458 ******* 2025-02-10 09:31:30.872965 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.872975 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.872985 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.872995 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.873005 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.873015 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.873025 | orchestrator | 2025-02-10 09:31:30.873035 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:31:30.873046 | orchestrator | Monday 10 February 2025 09:20:58 +0000 (0:00:00.972) 0:03:51.431 ******* 2025-02-10 09:31:30.873056 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.873066 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.873076 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.873083 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.873089 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.873095 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.873107 | orchestrator | 2025-02-10 09:31:30.873113 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:31:30.873120 | orchestrator | Monday 10 February 2025 09:20:58 +0000 (0:00:00.858) 0:03:52.289 ******* 2025-02-10 09:31:30.873126 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:31:30.873133 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.873139 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:31:30.873145 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.873151 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:31:30.873157 
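The rgw_instances fact being assembled here ends up as one entry per RGW instance and host; the per-item tasks seen earlier (instance directories, environment files) and the rgw_instances_host/rgw_instances_all facts below iterate over exactly this structure. Reconstructed from the item values visible in this run, the fact on testbed-node-3 looks like:

    rgw_instances:
      - instance_name: rgw0
        radosgw_address: 192.168.16.13
        radosgw_frontend_port: 8081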
| orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.873180 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:31:30.873186 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.873192 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:31:30.873198 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.873204 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:31:30.873210 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.873217 | orchestrator | 2025-02-10 09:31:30.873223 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:31:30.873229 | orchestrator | Monday 10 February 2025 09:21:00 +0000 (0:00:01.630) 0:03:53.920 ******* 2025-02-10 09:31:30.873236 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.873242 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.873249 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.873255 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.873261 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.873267 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.873274 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.873280 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.873286 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.873292 | orchestrator | 2025-02-10 09:31:30.873298 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:31:30.873305 | orchestrator | Monday 10 February 2025 09:21:01 +0000 (0:00:01.215) 0:03:55.136 ******* 2025-02-10 09:31:30.873312 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.873323 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-10 09:31:30.873331 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.873388 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-10 09:31:30.873397 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:31:30.873405 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.873412 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:31:30.873420 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-10 09:31:30.873427 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:31:30.873434 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-10 09:31:30.873442 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-10 09:31:30.873449 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.873456 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-10 09:31:30.873463 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-10 09:31:30.873475 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-10 09:31:30.873482 | orchestrator | skipping: [testbed-node-1] => 
(item=testbed-node-5)  2025-02-10 09:31:30.873490 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.873511 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.873522 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.873532 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.873542 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-10 09:31:30.873553 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-10 09:31:30.873562 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-10 09:31:30.873572 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.873583 | orchestrator | 2025-02-10 09:31:30.873592 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-10 09:31:30.873598 | orchestrator | Monday 10 February 2025 09:21:03 +0000 (0:00:01.969) 0:03:57.106 ******* 2025-02-10 09:31:30.873604 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.873610 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.873617 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.873623 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.873629 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.873635 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.873641 | orchestrator | 2025-02-10 09:31:30.873647 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-10 09:31:30.873653 | orchestrator | Monday 10 February 2025 09:21:10 +0000 (0:00:06.907) 0:04:04.013 ******* 2025-02-10 09:31:30.873659 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.873666 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.873672 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.873678 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.873684 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.873690 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.873696 | orchestrator | 2025-02-10 09:31:30.873702 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-02-10 09:31:30.873709 | orchestrator | Monday 10 February 2025 09:21:12 +0000 (0:00:02.021) 0:04:06.035 ******* 2025-02-10 09:31:30.873715 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.873721 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.873727 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.873733 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.873739 | orchestrator | 2025-02-10 09:31:30.873746 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-02-10 09:31:30.873752 | orchestrator | Monday 10 February 2025 09:21:14 +0000 (0:00:01.830) 0:04:07.866 ******* 2025-02-10 09:31:30.873758 | orchestrator | 2025-02-10 09:31:30.873764 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-02-10 09:31:30.873770 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.873777 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.873783 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.873797 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 
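Everything from "make tempdir for scripts" onward runs as Ansible handlers: the "generate ceph.conf configuration file" template task reported changed on all six nodes, which notifies the per-daemon handlers that stage a restart script and, where the guard conditions allow, restart the daemons. A minimal sketch of that notify wiring, with the group name, template and handler file names assumed:

    - hosts: ceph_nodes
      tasks:
        - name: generate ceph.conf configuration file (sketch)
          ansible.builtin.template:
            src: ceph.conf.j2
            dest: /etc/ceph/ceph.conf
          notify: mons handler
      handlers:
        - name: mons handler
          ansible.builtin.include_tasks: handler_mons.yml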
09:31:30.873803 | orchestrator | 2025-02-10 09:31:30.873809 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-02-10 09:31:30.873815 | orchestrator | Monday 10 February 2025 09:21:16 +0000 (0:00:01.614) 0:04:09.480 ******* 2025-02-10 09:31:30.873822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.873828 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.873834 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.873840 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.873846 | orchestrator | 2025-02-10 09:31:30.873852 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-02-10 09:31:30.873858 | orchestrator | Monday 10 February 2025 09:21:16 +0000 (0:00:00.609) 0:04:10.090 ******* 2025-02-10 09:31:30.873865 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.873871 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.873881 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.873887 | orchestrator | 2025-02-10 09:31:30.873928 | orchestrator | TASK [ceph-handler : set _osd_handler_called before restart] ******************* 2025-02-10 09:31:30.873936 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:31:30.873942 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:31:30.873949 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:31:30.873955 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.873961 | orchestrator | 2025-02-10 09:31:30.873967 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-02-10 09:31:30.873973 | orchestrator | Monday 10 February 2025 09:21:18 +0000 (0:00:01.728) 0:04:11.819 ******* 2025-02-10 09:31:30.873979 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.873986 | orchestrator | 2025-02-10 09:31:30.873992 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-02-10 09:31:30.874077 | orchestrator | Monday 10 February 2025 09:21:18 +0000 (0:00:00.346) 0:04:12.165 ******* 2025-02-10 09:31:30.874088 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.874099 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.874109 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.874119 | orchestrator | 2025-02-10 09:31:30.874129 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-02-10 09:31:30.874138 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.874148 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.874158 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.874169 | orchestrator | 2025-02-10 09:31:30.874176 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-02-10 09:31:30.874182 | orchestrator | Monday 10 February 2025 09:21:20 +0000 (0:00:01.223) 0:04:13.389 ******* 2025-02-10 09:31:30.874188 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.874194 | orchestrator | 2025-02-10 09:31:30.874201 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-02-10 09:31:30.874207 | orchestrator | Monday 10 February 2025 09:21:20 +0000 (0:00:00.291) 0:04:13.680 ******* 
2025-02-10 09:31:30.874213 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.874219 | orchestrator | 2025-02-10 09:31:30.874226 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-02-10 09:31:30.874232 | orchestrator | Monday 10 February 2025 09:21:20 +0000 (0:00:00.358) 0:04:14.038 ******* 2025-02-10 09:31:30.874238 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.874244 | orchestrator | 2025-02-10 09:31:30.874250 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-02-10 09:31:30.874256 | orchestrator | Monday 10 February 2025 09:21:20 +0000 (0:00:00.157) 0:04:14.196 ******* 2025-02-10 09:31:30.874262 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.874268 | orchestrator | 2025-02-10 09:31:30.874274 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-02-10 09:31:30.874280 | orchestrator | Monday 10 February 2025 09:21:21 +0000 (0:00:00.325) 0:04:14.521 ******* 2025-02-10 09:31:30.874287 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.874295 | orchestrator | 2025-02-10 09:31:30.874305 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-02-10 09:31:30.874315 | orchestrator | Monday 10 February 2025 09:21:21 +0000 (0:00:00.330) 0:04:14.851 ******* 2025-02-10 09:31:30.874325 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.874334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.874344 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.874355 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.874364 | orchestrator | 2025-02-10 09:31:30.874375 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-02-10 09:31:30.874386 | orchestrator | Monday 10 February 2025 09:21:22 +0000 (0:00:00.587) 0:04:15.438 ******* 2025-02-10 09:31:30.874398 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.874404 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.874410 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.874415 | orchestrator | 2025-02-10 09:31:30.874421 | orchestrator | TASK [ceph-handler : set _osd_handler_called after restart] ******************** 2025-02-10 09:31:30.874427 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.874433 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.874439 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.874445 | orchestrator | 2025-02-10 09:31:30.874450 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] *************** 2025-02-10 09:31:30.874456 | orchestrator | Monday 10 February 2025 09:21:23 +0000 (0:00:01.090) 0:04:16.529 ******* 2025-02-10 09:31:30.874462 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.874468 | orchestrator | 2025-02-10 09:31:30.874474 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-02-10 09:31:30.874479 | orchestrator | Monday 10 February 2025 09:21:23 +0000 (0:00:00.289) 0:04:16.818 ******* 2025-02-10 09:31:30.874485 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.874491 | orchestrator | 2025-02-10 09:31:30.874497 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 
2025-02-10 09:31:30.874502 | orchestrator | Monday 10 February 2025 09:21:23 +0000 (0:00:00.285) 0:04:17.104 ******* 2025-02-10 09:31:30.874509 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.874514 | orchestrator | 2025-02-10 09:31:30.874521 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-02-10 09:31:30.874526 | orchestrator | Monday 10 February 2025 09:21:24 +0000 (0:00:01.212) 0:04:18.317 ******* 2025-02-10 09:31:30.874532 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.874538 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.874544 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.874550 | orchestrator | 2025-02-10 09:31:30.874555 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-02-10 09:31:30.874561 | orchestrator | Monday 10 February 2025 09:21:26 +0000 (0:00:01.585) 0:04:19.902 ******* 2025-02-10 09:31:30.874567 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.874573 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.874579 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.874585 | orchestrator | 2025-02-10 09:31:30.874591 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-02-10 09:31:30.874596 | orchestrator | Monday 10 February 2025 09:21:27 +0000 (0:00:00.707) 0:04:20.610 ******* 2025-02-10 09:31:30.874602 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.874608 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.874614 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.874620 | orchestrator | 2025-02-10 09:31:30.874626 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-02-10 09:31:30.874649 | orchestrator | Monday 10 February 2025 09:21:28 +0000 (0:00:00.866) 0:04:21.477 ******* 2025-02-10 09:31:30.874655 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.874661 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.874667 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.874673 | orchestrator | 2025-02-10 09:31:30.874679 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-02-10 09:31:30.874732 | orchestrator | Monday 10 February 2025 09:21:28 +0000 (0:00:00.667) 0:04:22.145 ******* 2025-02-10 09:31:30.874742 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.874748 | orchestrator | 2025-02-10 09:31:30.874754 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-02-10 09:31:30.874761 | orchestrator | Monday 10 February 2025 09:21:30 +0000 (0:00:01.254) 0:04:23.399 ******* 2025-02-10 09:31:30.874767 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.874779 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.874785 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.874791 | orchestrator | 2025-02-10 09:31:30.874798 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-02-10 09:31:30.874804 | orchestrator | Monday 10 February 2025 09:21:30 +0000 (0:00:00.600) 0:04:23.999 ******* 2025-02-10 09:31:30.874810 | orchestrator | changed: [testbed-node-0] 2025-02-10 
09:31:30.874816 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.874823 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.874829 | orchestrator | 2025-02-10 09:31:30.874835 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-02-10 09:31:30.874841 | orchestrator | Monday 10 February 2025 09:21:32 +0000 (0:00:01.573) 0:04:25.573 ******* 2025-02-10 09:31:30.874847 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:31:30.874854 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:31:30.874860 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:31:30.874866 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.874873 | orchestrator | 2025-02-10 09:31:30.874879 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-02-10 09:31:30.874885 | orchestrator | Monday 10 February 2025 09:21:33 +0000 (0:00:01.019) 0:04:26.593 ******* 2025-02-10 09:31:30.874891 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.874916 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.874927 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.874933 | orchestrator | 2025-02-10 09:31:30.874939 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-10 09:31:30.874945 | orchestrator | Monday 10 February 2025 09:21:34 +0000 (0:00:00.803) 0:04:27.396 ******* 2025-02-10 09:31:30.874951 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.874956 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.874962 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.874973 | orchestrator | 2025-02-10 09:31:30.874979 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-02-10 09:31:30.874984 | orchestrator | Monday 10 February 2025 09:21:34 +0000 (0:00:00.358) 0:04:27.755 ******* 2025-02-10 09:31:30.874990 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.874996 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.875002 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.875008 | orchestrator | 2025-02-10 09:31:30.875014 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-02-10 09:31:30.875023 | orchestrator | Monday 10 February 2025 09:21:35 +0000 (0:00:01.350) 0:04:29.105 ******* 2025-02-10 09:31:30.875029 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.875035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.875041 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.875047 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.875053 | orchestrator | 2025-02-10 09:31:30.875059 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-02-10 09:31:30.875065 | orchestrator | Monday 10 February 2025 09:21:36 +0000 (0:00:01.011) 0:04:30.117 ******* 2025-02-10 09:31:30.875071 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.875077 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.875082 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.875088 | orchestrator | 2025-02-10 09:31:30.875094 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 
2025-02-10 09:31:30.875101 | orchestrator | Monday 10 February 2025 09:21:37 +0000 (0:00:00.526) 0:04:30.644 ******* 2025-02-10 09:31:30.875111 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.875120 | orchestrator | 2025-02-10 09:31:30.875130 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-02-10 09:31:30.875146 | orchestrator | Monday 10 February 2025 09:21:38 +0000 (0:00:00.755) 0:04:31.399 ******* 2025-02-10 09:31:30.875155 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.875164 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.875174 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.875183 | orchestrator | 2025-02-10 09:31:30.875194 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-02-10 09:31:30.875200 | orchestrator | Monday 10 February 2025 09:21:38 +0000 (0:00:00.540) 0:04:31.940 ******* 2025-02-10 09:31:30.875206 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.875212 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.875217 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.875223 | orchestrator | 2025-02-10 09:31:30.875229 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-02-10 09:31:30.875235 | orchestrator | Monday 10 February 2025 09:21:39 +0000 (0:00:01.190) 0:04:33.131 ******* 2025-02-10 09:31:30.875241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.875247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.875253 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.875258 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.875264 | orchestrator | 2025-02-10 09:31:30.875270 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-02-10 09:31:30.875276 | orchestrator | Monday 10 February 2025 09:21:40 +0000 (0:00:00.817) 0:04:33.948 ******* 2025-02-10 09:31:30.875282 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.875291 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.875302 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.875311 | orchestrator | 2025-02-10 09:31:30.875373 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-02-10 09:31:30.875384 | orchestrator | Monday 10 February 2025 09:21:41 +0000 (0:00:00.438) 0:04:34.387 ******* 2025-02-10 09:31:30.875394 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.875405 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.875414 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.875423 | orchestrator | 2025-02-10 09:31:30.875433 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-02-10 09:31:30.875443 | orchestrator | Monday 10 February 2025 09:21:41 +0000 (0:00:00.683) 0:04:35.070 ******* 2025-02-10 09:31:30.875453 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.875463 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.875473 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.875482 | orchestrator | 2025-02-10 09:31:30.875492 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 
2025-02-10 09:31:30.875502 | orchestrator | Monday 10 February 2025 09:21:42 +0000 (0:00:00.422) 0:04:35.493 ******* 2025-02-10 09:31:30.875512 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.875522 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.875531 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.875541 | orchestrator | 2025-02-10 09:31:30.875551 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-10 09:31:30.875557 | orchestrator | Monday 10 February 2025 09:21:42 +0000 (0:00:00.414) 0:04:35.907 ******* 2025-02-10 09:31:30.875563 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.875569 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.875575 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.875580 | orchestrator | 2025-02-10 09:31:30.875587 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-02-10 09:31:30.875597 | orchestrator | 2025-02-10 09:31:30.875606 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-10 09:31:30.875615 | orchestrator | Monday 10 February 2025 09:21:45 +0000 (0:00:02.899) 0:04:38.807 ******* 2025-02-10 09:31:30.875625 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.875642 | orchestrator | 2025-02-10 09:31:30.875653 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-10 09:31:30.875662 | orchestrator | Monday 10 February 2025 09:21:46 +0000 (0:00:00.856) 0:04:39.663 ******* 2025-02-10 09:31:30.875672 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.875681 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.875691 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.875700 | orchestrator | 2025-02-10 09:31:30.875710 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-10 09:31:30.875719 | orchestrator | Monday 10 February 2025 09:21:47 +0000 (0:00:01.208) 0:04:40.871 ******* 2025-02-10 09:31:30.875728 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.875737 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.875743 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.875749 | orchestrator | 2025-02-10 09:31:30.875755 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-10 09:31:30.875761 | orchestrator | Monday 10 February 2025 09:21:48 +0000 (0:00:00.666) 0:04:41.538 ******* 2025-02-10 09:31:30.875766 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.875772 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.875778 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.875784 | orchestrator | 2025-02-10 09:31:30.875790 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-10 09:31:30.875796 | orchestrator | Monday 10 February 2025 09:21:48 +0000 (0:00:00.526) 0:04:42.065 ******* 2025-02-10 09:31:30.875801 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.875807 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.875813 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.875819 | orchestrator | 2025-02-10 09:31:30.875830 | orchestrator | TASK [ceph-handler : check for a mgr 
container] ******************************** 2025-02-10 09:31:30.875835 | orchestrator | Monday 10 February 2025 09:21:49 +0000 (0:00:00.559) 0:04:42.624 ******* 2025-02-10 09:31:30.875841 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.875860 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.875866 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.875872 | orchestrator | 2025-02-10 09:31:30.875878 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-10 09:31:30.875884 | orchestrator | Monday 10 February 2025 09:21:50 +0000 (0:00:01.079) 0:04:43.703 ******* 2025-02-10 09:31:30.875890 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.875912 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.875918 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.875924 | orchestrator | 2025-02-10 09:31:30.875930 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-10 09:31:30.875936 | orchestrator | Monday 10 February 2025 09:21:50 +0000 (0:00:00.628) 0:04:44.331 ******* 2025-02-10 09:31:30.875941 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.875947 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.875954 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.875960 | orchestrator | 2025-02-10 09:31:30.875965 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-10 09:31:30.875971 | orchestrator | Monday 10 February 2025 09:21:51 +0000 (0:00:00.438) 0:04:44.770 ******* 2025-02-10 09:31:30.875977 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.875983 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.875994 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.876000 | orchestrator | 2025-02-10 09:31:30.876006 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-10 09:31:30.876013 | orchestrator | Monday 10 February 2025 09:21:51 +0000 (0:00:00.438) 0:04:45.208 ******* 2025-02-10 09:31:30.876020 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.876027 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.876033 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.876040 | orchestrator | 2025-02-10 09:31:30.876051 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-10 09:31:30.876058 | orchestrator | Monday 10 February 2025 09:21:52 +0000 (0:00:00.450) 0:04:45.659 ******* 2025-02-10 09:31:30.876126 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.876139 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.876147 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.876157 | orchestrator | 2025-02-10 09:31:30.876165 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-10 09:31:30.876175 | orchestrator | Monday 10 February 2025 09:21:52 +0000 (0:00:00.578) 0:04:46.237 ******* 2025-02-10 09:31:30.876183 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.876191 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.876199 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.876208 | orchestrator | 2025-02-10 09:31:30.876216 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-10 09:31:30.876226 | orchestrator | Monday 10 
February 2025 09:21:53 +0000 (0:00:00.708) 0:04:46.946 ******* 2025-02-10 09:31:30.876234 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.876243 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.876253 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.876263 | orchestrator | 2025-02-10 09:31:30.876273 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-10 09:31:30.876279 | orchestrator | Monday 10 February 2025 09:21:53 +0000 (0:00:00.281) 0:04:47.227 ******* 2025-02-10 09:31:30.876285 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.876291 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.876297 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.876303 | orchestrator | 2025-02-10 09:31:30.876309 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-10 09:31:30.876315 | orchestrator | Monday 10 February 2025 09:21:54 +0000 (0:00:00.384) 0:04:47.612 ******* 2025-02-10 09:31:30.876321 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.876327 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.876332 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.876338 | orchestrator | 2025-02-10 09:31:30.876344 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-10 09:31:30.876350 | orchestrator | Monday 10 February 2025 09:21:54 +0000 (0:00:00.489) 0:04:48.101 ******* 2025-02-10 09:31:30.876356 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.876361 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.876367 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.876373 | orchestrator | 2025-02-10 09:31:30.876379 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-10 09:31:30.876385 | orchestrator | Monday 10 February 2025 09:21:55 +0000 (0:00:00.343) 0:04:48.445 ******* 2025-02-10 09:31:30.876391 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.876397 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.876402 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.876409 | orchestrator | 2025-02-10 09:31:30.876414 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-10 09:31:30.876420 | orchestrator | Monday 10 February 2025 09:21:55 +0000 (0:00:00.395) 0:04:48.841 ******* 2025-02-10 09:31:30.876426 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.876432 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.876438 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.876444 | orchestrator | 2025-02-10 09:31:30.876450 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-10 09:31:30.876456 | orchestrator | Monday 10 February 2025 09:21:55 +0000 (0:00:00.343) 0:04:49.185 ******* 2025-02-10 09:31:30.876462 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.876468 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.876474 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.876479 | orchestrator | 2025-02-10 09:31:30.876485 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-10 09:31:30.876497 | orchestrator | Monday 10 February 2025 09:21:56 +0000 (0:00:00.700) 0:04:49.886 ******* 2025-02-10 
09:31:30.876503 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.876509 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.876515 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.876521 | orchestrator | 2025-02-10 09:31:30.876527 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-10 09:31:30.876533 | orchestrator | Monday 10 February 2025 09:21:56 +0000 (0:00:00.439) 0:04:50.326 ******* 2025-02-10 09:31:30.876539 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.876544 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.876550 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.876556 | orchestrator | 2025-02-10 09:31:30.876566 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-10 09:31:30.876572 | orchestrator | Monday 10 February 2025 09:21:57 +0000 (0:00:00.509) 0:04:50.836 ******* 2025-02-10 09:31:30.876578 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.876584 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.876590 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.876596 | orchestrator | 2025-02-10 09:31:30.876602 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-10 09:31:30.876607 | orchestrator | Monday 10 February 2025 09:21:57 +0000 (0:00:00.443) 0:04:51.279 ******* 2025-02-10 09:31:30.876613 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.876619 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.876625 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.876631 | orchestrator | 2025-02-10 09:31:30.876637 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-10 09:31:30.876643 | orchestrator | Monday 10 February 2025 09:21:58 +0000 (0:00:00.862) 0:04:52.142 ******* 2025-02-10 09:31:30.876648 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.876654 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.876660 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.876666 | orchestrator | 2025-02-10 09:31:30.876672 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-10 09:31:30.876679 | orchestrator | Monday 10 February 2025 09:21:59 +0000 (0:00:00.472) 0:04:52.614 ******* 2025-02-10 09:31:30.876689 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.876698 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.876707 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.876717 | orchestrator | 2025-02-10 09:31:30.876726 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-10 09:31:30.876736 | orchestrator | Monday 10 February 2025 09:21:59 +0000 (0:00:00.440) 0:04:53.055 ******* 2025-02-10 09:31:30.876747 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.876814 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.876828 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.876837 | orchestrator | 2025-02-10 09:31:30.876846 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-10 09:31:30.876855 | orchestrator | Monday 10 February 2025 09:22:00 +0000 (0:00:00.437) 0:04:53.492 ******* 2025-02-10 09:31:30.876864 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.876873 | 
orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.876883 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.876893 | orchestrator | 2025-02-10 09:31:30.876949 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-10 09:31:30.876960 | orchestrator | Monday 10 February 2025 09:22:00 +0000 (0:00:00.825) 0:04:54.318 ******* 2025-02-10 09:31:30.876971 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.876981 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.876992 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.877000 | orchestrator | 2025-02-10 09:31:30.877010 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-10 09:31:30.877021 | orchestrator | Monday 10 February 2025 09:22:01 +0000 (0:00:00.433) 0:04:54.751 ******* 2025-02-10 09:31:30.877043 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.877053 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.877063 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.877072 | orchestrator | 2025-02-10 09:31:30.877082 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-10 09:31:30.877092 | orchestrator | Monday 10 February 2025 09:22:01 +0000 (0:00:00.487) 0:04:55.239 ******* 2025-02-10 09:31:30.877107 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.877116 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.877125 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.877134 | orchestrator | 2025-02-10 09:31:30.877143 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-10 09:31:30.877152 | orchestrator | Monday 10 February 2025 09:22:02 +0000 (0:00:00.546) 0:04:55.786 ******* 2025-02-10 09:31:30.877161 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.877169 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.877177 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.877186 | orchestrator | 2025-02-10 09:31:30.877195 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-10 09:31:30.877204 | orchestrator | Monday 10 February 2025 09:22:03 +0000 (0:00:00.744) 0:04:56.530 ******* 2025-02-10 09:31:30.877212 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.877221 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.877230 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.877238 | orchestrator | 2025-02-10 09:31:30.877247 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-10 09:31:30.877256 | orchestrator | Monday 10 February 2025 09:22:03 +0000 (0:00:00.405) 0:04:56.935 ******* 2025-02-10 09:31:30.877265 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.877273 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.877282 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.877290 | orchestrator | 2025-02-10 09:31:30.877300 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-10 09:31:30.877309 | orchestrator | Monday 10 February 2025 09:22:04 +0000 (0:00:00.527) 0:04:57.463 ******* 2025-02-10 09:31:30.877319 | orchestrator | skipping: 
[testbed-node-0] => (item=)  2025-02-10 09:31:30.877327 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:31:30.877336 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.877345 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:31:30.877354 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:31:30.877404 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.877414 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:31:30.877423 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:31:30.877432 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.877441 | orchestrator | 2025-02-10 09:31:30.877450 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-10 09:31:30.877459 | orchestrator | Monday 10 February 2025 09:22:04 +0000 (0:00:00.559) 0:04:58.023 ******* 2025-02-10 09:31:30.877468 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-02-10 09:31:30.877477 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-02-10 09:31:30.877486 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.877496 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-02-10 09:31:30.877506 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-02-10 09:31:30.877515 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.877524 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-02-10 09:31:30.877533 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-02-10 09:31:30.877542 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.877559 | orchestrator | 2025-02-10 09:31:30.877568 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-10 09:31:30.877577 | orchestrator | Monday 10 February 2025 09:22:05 +0000 (0:00:00.956) 0:04:58.980 ******* 2025-02-10 09:31:30.877586 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.877595 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.877604 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.877613 | orchestrator | 2025-02-10 09:31:30.877622 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-10 09:31:30.877632 | orchestrator | Monday 10 February 2025 09:22:06 +0000 (0:00:00.463) 0:04:59.443 ******* 2025-02-10 09:31:30.877641 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.877650 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.877659 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.877668 | orchestrator | 2025-02-10 09:31:30.877677 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:31:30.877764 | orchestrator | Monday 10 February 2025 09:22:06 +0000 (0:00:00.507) 0:04:59.950 ******* 2025-02-10 09:31:30.877778 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.877788 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.877796 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.877805 | orchestrator | 2025-02-10 09:31:30.877813 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:31:30.877822 | orchestrator | Monday 10 February 2025 09:22:07 
+0000 (0:00:00.745) 0:05:00.696 ******* 2025-02-10 09:31:30.877831 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.877840 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.877849 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.877858 | orchestrator | 2025-02-10 09:31:30.877867 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:31:30.877881 | orchestrator | Monday 10 February 2025 09:22:07 +0000 (0:00:00.488) 0:05:01.185 ******* 2025-02-10 09:31:30.877890 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.877912 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.877922 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.877930 | orchestrator | 2025-02-10 09:31:30.877939 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:31:30.877947 | orchestrator | Monday 10 February 2025 09:22:08 +0000 (0:00:00.473) 0:05:01.658 ******* 2025-02-10 09:31:30.877952 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.877957 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.877963 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.877968 | orchestrator | 2025-02-10 09:31:30.877973 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:31:30.877981 | orchestrator | Monday 10 February 2025 09:22:08 +0000 (0:00:00.446) 0:05:02.105 ******* 2025-02-10 09:31:30.877990 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:31:30.877998 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:31:30.878006 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:31:30.878035 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.878046 | orchestrator | 2025-02-10 09:31:30.878056 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:31:30.878065 | orchestrator | Monday 10 February 2025 09:22:09 +0000 (0:00:00.914) 0:05:03.020 ******* 2025-02-10 09:31:30.878074 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:31:30.878083 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:31:30.878093 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:31:30.878102 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.878112 | orchestrator | 2025-02-10 09:31:30.878121 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-10 09:31:30.878138 | orchestrator | Monday 10 February 2025 09:22:11 +0000 (0:00:01.354) 0:05:04.374 ******* 2025-02-10 09:31:30.878148 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:31:30.878161 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:31:30.878170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:31:30.878180 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.878189 | orchestrator | 2025-02-10 09:31:30.878199 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:31:30.878208 | orchestrator | Monday 10 February 2025 09:22:11 +0000 (0:00:00.687) 0:05:05.062 ******* 2025-02-10 09:31:30.878217 | orchestrator | skipping: [testbed-node-0] 
2025-02-10 09:31:30.878230 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.878239 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.878249 | orchestrator | 2025-02-10 09:31:30.878258 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:31:30.878268 | orchestrator | Monday 10 February 2025 09:22:12 +0000 (0:00:00.489) 0:05:05.552 ******* 2025-02-10 09:31:30.878277 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:31:30.878286 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.878296 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:31:30.878305 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.878312 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:31:30.878318 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.878325 | orchestrator | 2025-02-10 09:31:30.878334 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:31:30.878343 | orchestrator | Monday 10 February 2025 09:22:12 +0000 (0:00:00.771) 0:05:06.323 ******* 2025-02-10 09:31:30.878352 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.878359 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.878367 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.878375 | orchestrator | 2025-02-10 09:31:30.878383 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:31:30.878391 | orchestrator | Monday 10 February 2025 09:22:13 +0000 (0:00:00.487) 0:05:06.810 ******* 2025-02-10 09:31:30.878400 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.878408 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.878417 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.878426 | orchestrator | 2025-02-10 09:31:30.878435 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:31:30.878444 | orchestrator | Monday 10 February 2025 09:22:14 +0000 (0:00:00.866) 0:05:07.676 ******* 2025-02-10 09:31:30.878453 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:31:30.878462 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.878471 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:31:30.878480 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.878489 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:31:30.878498 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.878507 | orchestrator | 2025-02-10 09:31:30.878516 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:31:30.878524 | orchestrator | Monday 10 February 2025 09:22:15 +0000 (0:00:00.982) 0:05:08.659 ******* 2025-02-10 09:31:30.878533 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.878542 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.878583 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.878592 | orchestrator | 2025-02-10 09:31:30.878601 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:31:30.878610 | orchestrator | Monday 10 February 2025 09:22:15 +0000 (0:00:00.526) 0:05:09.186 ******* 2025-02-10 09:31:30.878619 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:31:30.878628 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:31:30.878644 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:31:30.878652 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-10 09:31:30.878658 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-10 09:31:30.878664 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-10 09:31:30.878670 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.878676 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.878681 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-10 09:31:30.878688 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-10 09:31:30.878694 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-10 09:31:30.878699 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.878705 | orchestrator | 2025-02-10 09:31:30.878711 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-10 09:31:30.878717 | orchestrator | Monday 10 February 2025 09:22:17 +0000 (0:00:01.504) 0:05:10.690 ******* 2025-02-10 09:31:30.878723 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.878729 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.878735 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.878741 | orchestrator | 2025-02-10 09:31:30.878747 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-10 09:31:30.878753 | orchestrator | Monday 10 February 2025 09:22:18 +0000 (0:00:01.014) 0:05:11.705 ******* 2025-02-10 09:31:30.878759 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.878764 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.878769 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.878774 | orchestrator | 2025-02-10 09:31:30.878780 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-10 09:31:30.878785 | orchestrator | Monday 10 February 2025 09:22:19 +0000 (0:00:01.078) 0:05:12.784 ******* 2025-02-10 09:31:30.878790 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.878796 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.878804 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.878813 | orchestrator | 2025-02-10 09:31:30.878822 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-10 09:31:30.878836 | orchestrator | Monday 10 February 2025 09:22:20 +0000 (0:00:00.663) 0:05:13.447 ******* 2025-02-10 09:31:30.878845 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.878853 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.878863 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.878871 | orchestrator | 2025-02-10 09:31:30.878884 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-02-10 09:31:30.878906 | orchestrator | Monday 10 February 2025 09:22:20 +0000 (0:00:00.753) 0:05:14.200 ******* 2025-02-10 09:31:30.878915 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.878924 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.878932 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.878940 | orchestrator | 2025-02-10 09:31:30.878949 | orchestrator | TASK [ceph-mon : include 
deploy_monitors.yml] ********************************** 2025-02-10 09:31:30.878958 | orchestrator | Monday 10 February 2025 09:22:21 +0000 (0:00:00.451) 0:05:14.652 ******* 2025-02-10 09:31:30.878967 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.878975 | orchestrator | 2025-02-10 09:31:30.878980 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-02-10 09:31:30.878986 | orchestrator | Monday 10 February 2025 09:22:22 +0000 (0:00:00.876) 0:05:15.528 ******* 2025-02-10 09:31:30.878991 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.878996 | orchestrator | 2025-02-10 09:31:30.879002 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-02-10 09:31:30.879007 | orchestrator | Monday 10 February 2025 09:22:22 +0000 (0:00:00.176) 0:05:15.705 ******* 2025-02-10 09:31:30.879012 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-02-10 09:31:30.879025 | orchestrator | 2025-02-10 09:31:30.879033 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-02-10 09:31:30.879042 | orchestrator | Monday 10 February 2025 09:22:23 +0000 (0:00:00.666) 0:05:16.371 ******* 2025-02-10 09:31:30.879050 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.879058 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.879067 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.879075 | orchestrator | 2025-02-10 09:31:30.879084 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-02-10 09:31:30.879092 | orchestrator | Monday 10 February 2025 09:22:23 +0000 (0:00:00.482) 0:05:16.853 ******* 2025-02-10 09:31:30.879100 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.879108 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.879116 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.879124 | orchestrator | 2025-02-10 09:31:30.879132 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-02-10 09:31:30.879141 | orchestrator | Monday 10 February 2025 09:22:24 +0000 (0:00:00.557) 0:05:17.410 ******* 2025-02-10 09:31:30.879149 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.879157 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.879165 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.879173 | orchestrator | 2025-02-10 09:31:30.879181 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-02-10 09:31:30.879190 | orchestrator | Monday 10 February 2025 09:22:25 +0000 (0:00:01.525) 0:05:18.936 ******* 2025-02-10 09:31:30.879199 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.879208 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.879217 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.879226 | orchestrator | 2025-02-10 09:31:30.879265 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-02-10 09:31:30.879271 | orchestrator | Monday 10 February 2025 09:22:26 +0000 (0:00:00.787) 0:05:19.723 ******* 2025-02-10 09:31:30.879276 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.879286 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.879292 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.879300 | orchestrator | 
2025-02-10 09:31:30.879305 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-02-10 09:31:30.879311 | orchestrator | Monday 10 February 2025 09:22:27 +0000 (0:00:00.712) 0:05:20.435 ******* 2025-02-10 09:31:30.879316 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.879322 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.879327 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.879332 | orchestrator | 2025-02-10 09:31:30.879337 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-02-10 09:31:30.879343 | orchestrator | Monday 10 February 2025 09:22:27 +0000 (0:00:00.732) 0:05:21.168 ******* 2025-02-10 09:31:30.879348 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.879353 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.879359 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.879364 | orchestrator | 2025-02-10 09:31:30.879369 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-02-10 09:31:30.879375 | orchestrator | Monday 10 February 2025 09:22:28 +0000 (0:00:00.612) 0:05:21.781 ******* 2025-02-10 09:31:30.879380 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.879385 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.879390 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.879396 | orchestrator | 2025-02-10 09:31:30.879401 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-02-10 09:31:30.879406 | orchestrator | Monday 10 February 2025 09:22:28 +0000 (0:00:00.442) 0:05:22.224 ******* 2025-02-10 09:31:30.879411 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.879418 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.879428 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.879437 | orchestrator | 2025-02-10 09:31:30.879447 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-02-10 09:31:30.879462 | orchestrator | Monday 10 February 2025 09:22:29 +0000 (0:00:00.559) 0:05:22.784 ******* 2025-02-10 09:31:30.879471 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.879479 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.879488 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.879497 | orchestrator | 2025-02-10 09:31:30.879506 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-02-10 09:31:30.879516 | orchestrator | Monday 10 February 2025 09:22:29 +0000 (0:00:00.481) 0:05:23.266 ******* 2025-02-10 09:31:30.879526 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.879531 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.879536 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.879541 | orchestrator | 2025-02-10 09:31:30.879547 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-02-10 09:31:30.879552 | orchestrator | Monday 10 February 2025 09:22:31 +0000 (0:00:01.863) 0:05:25.130 ******* 2025-02-10 09:31:30.879557 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.879562 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.879568 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.879573 | orchestrator | 2025-02-10 09:31:30.879582 | orchestrator | TASK [ceph-mon : include start_monitor.yml] 
************************************ 2025-02-10 09:31:30.879588 | orchestrator | Monday 10 February 2025 09:22:32 +0000 (0:00:00.453) 0:05:25.583 ******* 2025-02-10 09:31:30.879593 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.879599 | orchestrator | 2025-02-10 09:31:30.879604 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] ************* 2025-02-10 09:31:30.879609 | orchestrator | Monday 10 February 2025 09:22:33 +0000 (0:00:00.879) 0:05:26.462 ******* 2025-02-10 09:31:30.879614 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.879620 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.879625 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.879630 | orchestrator | 2025-02-10 09:31:30.879635 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-02-10 09:31:30.879640 | orchestrator | Monday 10 February 2025 09:22:33 +0000 (0:00:00.711) 0:05:27.174 ******* 2025-02-10 09:31:30.879646 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.879651 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.879656 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.879661 | orchestrator | 2025-02-10 09:31:30.879666 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-02-10 09:31:30.879672 | orchestrator | Monday 10 February 2025 09:22:34 +0000 (0:00:00.453) 0:05:27.627 ******* 2025-02-10 09:31:30.879677 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.879683 | orchestrator | 2025-02-10 09:31:30.879688 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-02-10 09:31:30.879694 | orchestrator | Monday 10 February 2025 09:22:34 +0000 (0:00:00.664) 0:05:28.292 ******* 2025-02-10 09:31:30.879699 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.879704 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.879709 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.879724 | orchestrator | 2025-02-10 09:31:30.879730 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-02-10 09:31:30.879735 | orchestrator | Monday 10 February 2025 09:22:36 +0000 (0:00:01.890) 0:05:30.182 ******* 2025-02-10 09:31:30.879740 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.879745 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.879751 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.879756 | orchestrator | 2025-02-10 09:31:30.879761 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-02-10 09:31:30.879766 | orchestrator | Monday 10 February 2025 09:22:38 +0000 (0:00:01.273) 0:05:31.456 ******* 2025-02-10 09:31:30.879780 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.879785 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.879790 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.879795 | orchestrator | 2025-02-10 09:31:30.879821 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-02-10 09:31:30.879828 | orchestrator | Monday 10 February 2025 09:22:39 +0000 (0:00:01.809) 0:05:33.265 ******* 2025-02-10 09:31:30.879833 | orchestrator 
| changed: [testbed-node-0] 2025-02-10 09:31:30.879838 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.879844 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.879849 | orchestrator | 2025-02-10 09:31:30.879854 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-02-10 09:31:30.879860 | orchestrator | Monday 10 February 2025 09:22:42 +0000 (0:00:02.664) 0:05:35.929 ******* 2025-02-10 09:31:30.879865 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.879870 | orchestrator | 2025-02-10 09:31:30.879876 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-02-10 09:31:30.879881 | orchestrator | Monday 10 February 2025 09:22:43 +0000 (0:00:00.666) 0:05:36.596 ******* 2025-02-10 09:31:30.879887 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 2025-02-10 09:31:30.879892 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.879932 | orchestrator | 2025-02-10 09:31:30.879938 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-02-10 09:31:30.879944 | orchestrator | Monday 10 February 2025 09:23:04 +0000 (0:00:21.743) 0:05:58.340 ******* 2025-02-10 09:31:30.879949 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.879955 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.879960 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.879965 | orchestrator | 2025-02-10 09:31:30.879971 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-02-10 09:31:30.879976 | orchestrator | Monday 10 February 2025 09:23:12 +0000 (0:00:07.602) 0:06:05.942 ******* 2025-02-10 09:31:30.879981 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.879987 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.879992 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.879997 | orchestrator | 2025-02-10 09:31:30.880003 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-10 09:31:30.880008 | orchestrator | Monday 10 February 2025 09:23:14 +0000 (0:00:01.555) 0:06:07.498 ******* 2025-02-10 09:31:30.880013 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.880019 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.880024 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.880029 | orchestrator | 2025-02-10 09:31:30.880035 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-02-10 09:31:30.880040 | orchestrator | Monday 10 February 2025 09:23:15 +0000 (0:00:01.035) 0:06:08.533 ******* 2025-02-10 09:31:30.880045 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.880051 | orchestrator | 2025-02-10 09:31:30.880056 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-02-10 09:31:30.880061 | orchestrator | Monday 10 February 2025 09:23:16 +0000 (0:00:01.093) 0:06:09.627 ******* 2025-02-10 09:31:30.880066 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.880072 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.880077 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.880132 | orchestrator | 
2025-02-10 09:31:30.880138 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-02-10 09:31:30.880143 | orchestrator | Monday 10 February 2025 09:23:16 +0000 (0:00:00.461) 0:06:10.089 ******* 2025-02-10 09:31:30.880149 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.880154 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.880160 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.880169 | orchestrator | 2025-02-10 09:31:30.880178 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-02-10 09:31:30.880184 | orchestrator | Monday 10 February 2025 09:23:18 +0000 (0:00:01.480) 0:06:11.569 ******* 2025-02-10 09:31:30.880189 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:31:30.880194 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:31:30.880200 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:31:30.880205 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.880210 | orchestrator | 2025-02-10 09:31:30.880216 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-02-10 09:31:30.880221 | orchestrator | Monday 10 February 2025 09:23:19 +0000 (0:00:01.668) 0:06:13.237 ******* 2025-02-10 09:31:30.880226 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.880231 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.880237 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.880242 | orchestrator | 2025-02-10 09:31:30.880247 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-10 09:31:30.880253 | orchestrator | Monday 10 February 2025 09:23:20 +0000 (0:00:00.480) 0:06:13.718 ******* 2025-02-10 09:31:30.880258 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.880263 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.880268 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.880274 | orchestrator | 2025-02-10 09:31:30.880279 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-02-10 09:31:30.880284 | orchestrator | 2025-02-10 09:31:30.880289 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-10 09:31:30.880295 | orchestrator | Monday 10 February 2025 09:23:23 +0000 (0:00:02.796) 0:06:16.515 ******* 2025-02-10 09:31:30.880300 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.880306 | orchestrator | 2025-02-10 09:31:30.880311 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-10 09:31:30.880316 | orchestrator | Monday 10 February 2025 09:23:24 +0000 (0:00:00.971) 0:06:17.487 ******* 2025-02-10 09:31:30.880322 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.880327 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.880332 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.880338 | orchestrator | 2025-02-10 09:31:30.880343 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-10 09:31:30.880366 | orchestrator | Monday 10 February 2025 09:23:24 +0000 (0:00:00.840) 0:06:18.327 ******* 2025-02-10 09:31:30.880372 | orchestrator | skipping: [testbed-node-0] 2025-02-10 
09:31:30.880377 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.880382 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.880388 | orchestrator | 2025-02-10 09:31:30.880393 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-10 09:31:30.880398 | orchestrator | Monday 10 February 2025 09:23:25 +0000 (0:00:00.420) 0:06:18.747 ******* 2025-02-10 09:31:30.880404 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.880409 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.880414 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.880419 | orchestrator | 2025-02-10 09:31:30.880424 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-10 09:31:30.880428 | orchestrator | Monday 10 February 2025 09:23:26 +0000 (0:00:00.656) 0:06:19.404 ******* 2025-02-10 09:31:30.880433 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.880438 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.880443 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.880448 | orchestrator | 2025-02-10 09:31:30.880453 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-10 09:31:30.880458 | orchestrator | Monday 10 February 2025 09:23:26 +0000 (0:00:00.361) 0:06:19.765 ******* 2025-02-10 09:31:30.880466 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.880470 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.880475 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.880480 | orchestrator | 2025-02-10 09:31:30.880485 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-10 09:31:30.880490 | orchestrator | Monday 10 February 2025 09:23:27 +0000 (0:00:00.839) 0:06:20.605 ******* 2025-02-10 09:31:30.880495 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.880499 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.880504 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.880510 | orchestrator | 2025-02-10 09:31:30.880518 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-10 09:31:30.880526 | orchestrator | Monday 10 February 2025 09:23:27 +0000 (0:00:00.383) 0:06:20.989 ******* 2025-02-10 09:31:30.880534 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.880542 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.880549 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.880557 | orchestrator | 2025-02-10 09:31:30.880565 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-10 09:31:30.880573 | orchestrator | Monday 10 February 2025 09:23:28 +0000 (0:00:00.653) 0:06:21.643 ******* 2025-02-10 09:31:30.880581 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.880588 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.880596 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.880603 | orchestrator | 2025-02-10 09:31:30.880611 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-10 09:31:30.880616 | orchestrator | Monday 10 February 2025 09:23:28 +0000 (0:00:00.365) 0:06:22.008 ******* 2025-02-10 09:31:30.880621 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.880625 | orchestrator | skipping: [testbed-node-1] 2025-02-10 
09:31:30.880630 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.880635 | orchestrator | 2025-02-10 09:31:30.880640 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-10 09:31:30.880645 | orchestrator | Monday 10 February 2025 09:23:29 +0000 (0:00:00.446) 0:06:22.455 ******* 2025-02-10 09:31:30.880649 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.880654 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.880659 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.880664 | orchestrator | 2025-02-10 09:31:30.880668 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-10 09:31:30.880673 | orchestrator | Monday 10 February 2025 09:23:29 +0000 (0:00:00.378) 0:06:22.834 ******* 2025-02-10 09:31:30.880678 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.880683 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.880687 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.880692 | orchestrator | 2025-02-10 09:31:30.880697 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-10 09:31:30.880705 | orchestrator | Monday 10 February 2025 09:23:30 +0000 (0:00:01.229) 0:06:24.064 ******* 2025-02-10 09:31:30.880710 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.880715 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.880719 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.880724 | orchestrator | 2025-02-10 09:31:30.880729 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-10 09:31:30.880734 | orchestrator | Monday 10 February 2025 09:23:31 +0000 (0:00:00.397) 0:06:24.461 ******* 2025-02-10 09:31:30.880739 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.880743 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.880748 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.880753 | orchestrator | 2025-02-10 09:31:30.880758 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-10 09:31:30.880763 | orchestrator | Monday 10 February 2025 09:23:31 +0000 (0:00:00.521) 0:06:24.983 ******* 2025-02-10 09:31:30.880767 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.880775 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.880784 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.880788 | orchestrator | 2025-02-10 09:31:30.880793 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-10 09:31:30.880798 | orchestrator | Monday 10 February 2025 09:23:32 +0000 (0:00:00.406) 0:06:25.389 ******* 2025-02-10 09:31:30.880803 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.880808 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.880812 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.880817 | orchestrator | 2025-02-10 09:31:30.880822 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-10 09:31:30.880827 | orchestrator | Monday 10 February 2025 09:23:32 +0000 (0:00:00.701) 0:06:26.090 ******* 2025-02-10 09:31:30.880832 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.880836 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.880845 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.880850 | 
orchestrator | 2025-02-10 09:31:30.880855 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-10 09:31:30.880878 | orchestrator | Monday 10 February 2025 09:23:33 +0000 (0:00:00.390) 0:06:26.481 ******* 2025-02-10 09:31:30.880883 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.880888 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.880909 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.880918 | orchestrator | 2025-02-10 09:31:30.880927 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-10 09:31:30.880936 | orchestrator | Monday 10 February 2025 09:23:33 +0000 (0:00:00.383) 0:06:26.864 ******* 2025-02-10 09:31:30.880944 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.880952 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.880957 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.880962 | orchestrator | 2025-02-10 09:31:30.880967 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-10 09:31:30.880972 | orchestrator | Monday 10 February 2025 09:23:33 +0000 (0:00:00.462) 0:06:27.326 ******* 2025-02-10 09:31:30.880977 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.880981 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.880986 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.880991 | orchestrator | 2025-02-10 09:31:30.880996 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-10 09:31:30.881001 | orchestrator | Monday 10 February 2025 09:23:34 +0000 (0:00:00.808) 0:06:28.135 ******* 2025-02-10 09:31:30.881005 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.881010 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.881015 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.881020 | orchestrator | 2025-02-10 09:31:30.881025 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-10 09:31:30.881029 | orchestrator | Monday 10 February 2025 09:23:35 +0000 (0:00:00.464) 0:06:28.599 ******* 2025-02-10 09:31:30.881034 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881039 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881044 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881049 | orchestrator | 2025-02-10 09:31:30.881053 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-10 09:31:30.881058 | orchestrator | Monday 10 February 2025 09:23:35 +0000 (0:00:00.432) 0:06:29.032 ******* 2025-02-10 09:31:30.881063 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881068 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881072 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881077 | orchestrator | 2025-02-10 09:31:30.881082 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-10 09:31:30.881087 | orchestrator | Monday 10 February 2025 09:23:36 +0000 (0:00:00.404) 0:06:29.436 ******* 2025-02-10 09:31:30.881092 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881096 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881101 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881111 | orchestrator | 2025-02-10 09:31:30.881116 | orchestrator | TASK [ceph-config : count number of 
osds for lvm scenario] ********************* 2025-02-10 09:31:30.881121 | orchestrator | Monday 10 February 2025 09:23:36 +0000 (0:00:00.755) 0:06:30.191 ******* 2025-02-10 09:31:30.881126 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881131 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881136 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881141 | orchestrator | 2025-02-10 09:31:30.881145 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-10 09:31:30.881150 | orchestrator | Monday 10 February 2025 09:23:37 +0000 (0:00:00.449) 0:06:30.640 ******* 2025-02-10 09:31:30.881155 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881160 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881165 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881169 | orchestrator | 2025-02-10 09:31:30.881174 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-10 09:31:30.881179 | orchestrator | Monday 10 February 2025 09:23:37 +0000 (0:00:00.366) 0:06:31.007 ******* 2025-02-10 09:31:30.881184 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881189 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881193 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881198 | orchestrator | 2025-02-10 09:31:30.881203 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-10 09:31:30.881208 | orchestrator | Monday 10 February 2025 09:23:38 +0000 (0:00:00.695) 0:06:31.702 ******* 2025-02-10 09:31:30.881212 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881217 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881222 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881227 | orchestrator | 2025-02-10 09:31:30.881231 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-10 09:31:30.881240 | orchestrator | Monday 10 February 2025 09:23:38 +0000 (0:00:00.387) 0:06:32.089 ******* 2025-02-10 09:31:30.881244 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881249 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881254 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881259 | orchestrator | 2025-02-10 09:31:30.881264 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-10 09:31:30.881268 | orchestrator | Monday 10 February 2025 09:23:39 +0000 (0:00:00.402) 0:06:32.491 ******* 2025-02-10 09:31:30.881273 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881278 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881283 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881288 | orchestrator | 2025-02-10 09:31:30.881293 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-10 09:31:30.881297 | orchestrator | Monday 10 February 2025 09:23:39 +0000 (0:00:00.394) 0:06:32.886 ******* 2025-02-10 09:31:30.881303 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881307 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881312 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881317 | orchestrator | 2025-02-10 09:31:30.881322 | orchestrator | TASK [ceph-config : 
run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-10 09:31:30.881327 | orchestrator | Monday 10 February 2025 09:23:40 +0000 (0:00:00.705) 0:06:33.592 ******* 2025-02-10 09:31:30.881332 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881339 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881359 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881365 | orchestrator | 2025-02-10 09:31:30.881370 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-10 09:31:30.881374 | orchestrator | Monday 10 February 2025 09:23:40 +0000 (0:00:00.435) 0:06:34.028 ******* 2025-02-10 09:31:30.881379 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881384 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881393 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881397 | orchestrator | 2025-02-10 09:31:30.881402 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-10 09:31:30.881407 | orchestrator | Monday 10 February 2025 09:23:41 +0000 (0:00:00.431) 0:06:34.459 ******* 2025-02-10 09:31:30.881412 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:31:30.881417 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:31:30.881422 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:31:30.881427 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:31:30.881432 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881436 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881441 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:31:30.881446 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:31:30.881451 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881456 | orchestrator | 2025-02-10 09:31:30.881460 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-10 09:31:30.881465 | orchestrator | Monday 10 February 2025 09:23:41 +0000 (0:00:00.459) 0:06:34.919 ******* 2025-02-10 09:31:30.881470 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-02-10 09:31:30.881475 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-02-10 09:31:30.881479 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881484 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-02-10 09:31:30.881489 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-02-10 09:31:30.881494 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881499 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-02-10 09:31:30.881504 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-02-10 09:31:30.881508 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881513 | orchestrator | 2025-02-10 09:31:30.881518 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-10 09:31:30.881523 | orchestrator | Monday 10 February 2025 09:23:42 +0000 (0:00:00.730) 0:06:35.649 ******* 2025-02-10 09:31:30.881528 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881532 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881537 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881542 | orchestrator | 
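
The ceph-config tasks above (all skipped on these hosts) exist to work out how many OSDs a host will run: 'ceph-volume lvm batch --report' predicts how many OSDs would be created from the configured devices, 'ceph-volume lvm list' reports OSDs that already exist, and the combined count feeds the num_osds and _osd_memory_target facts. The following is a minimal sketch of that counting idea, not the role's actual code; the device paths are placeholders, and the assumption that the legacy report is a dict with an "osds" list while the newer report is a flat list may not hold for every ceph-volume release.

    import json
    import subprocess

    def planned_osds(devices):
        # '--report' only prints the plan; no disks are touched.
        out = subprocess.run(
            ["ceph-volume", "lvm", "batch", "--report", "--format", "json", *devices],
            check=True, capture_output=True, text=True,
        ).stdout
        report = json.loads(out)
        if isinstance(report, list):
            return len(report)               # newer report shape: flat list of planned OSDs
        return len(report.get("osds", []))   # legacy report shape: dict with an "osds" list

    def existing_osds():
        # 'ceph-volume lvm list' describes OSDs already created on this host.
        out = subprocess.run(
            ["ceph-volume", "lvm", "list", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return len(json.loads(out))

    # placeholder device list for illustration only
    num_osds = planned_osds(["/dev/sdb", "/dev/sdc"]) + existing_osds()

With num_osds known, a per-OSD memory target can be derived unless ceph_conf_overrides already pins osd_memory_target, which is what the 'drop osd_memory_target from conf override' and '_osd_memory_target' tasks above take care of.
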
2025-02-10 09:31:30.881547 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-10 09:31:30.881552 | orchestrator | Monday 10 February 2025 09:23:42 +0000 (0:00:00.409) 0:06:36.059 ******* 2025-02-10 09:31:30.881556 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881561 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881566 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881571 | orchestrator | 2025-02-10 09:31:30.881576 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:31:30.881581 | orchestrator | Monday 10 February 2025 09:23:43 +0000 (0:00:00.439) 0:06:36.498 ******* 2025-02-10 09:31:30.881585 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881590 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881595 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881600 | orchestrator | 2025-02-10 09:31:30.881605 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:31:30.881613 | orchestrator | Monday 10 February 2025 09:23:43 +0000 (0:00:00.387) 0:06:36.886 ******* 2025-02-10 09:31:30.881621 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881629 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881637 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881644 | orchestrator | 2025-02-10 09:31:30.881652 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:31:30.881659 | orchestrator | Monday 10 February 2025 09:23:44 +0000 (0:00:00.704) 0:06:37.590 ******* 2025-02-10 09:31:30.881671 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881679 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881687 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881693 | orchestrator | 2025-02-10 09:31:30.881698 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:31:30.881703 | orchestrator | Monday 10 February 2025 09:23:44 +0000 (0:00:00.409) 0:06:38.000 ******* 2025-02-10 09:31:30.881708 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881713 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881717 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881722 | orchestrator | 2025-02-10 09:31:30.881727 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:31:30.881732 | orchestrator | Monday 10 February 2025 09:23:45 +0000 (0:00:00.425) 0:06:38.426 ******* 2025-02-10 09:31:30.881737 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:31:30.881741 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:31:30.881746 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:31:30.881751 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881756 | orchestrator | 2025-02-10 09:31:30.881761 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:31:30.881768 | orchestrator | Monday 10 February 2025 09:23:45 +0000 (0:00:00.488) 0:06:38.914 ******* 2025-02-10 09:31:30.881776 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 
09:31:30.881783 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:31:30.881790 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:31:30.881798 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881806 | orchestrator | 2025-02-10 09:31:30.881834 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-10 09:31:30.881840 | orchestrator | Monday 10 February 2025 09:23:46 +0000 (0:00:00.510) 0:06:39.425 ******* 2025-02-10 09:31:30.881845 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:31:30.881850 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:31:30.881855 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:31:30.881860 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881868 | orchestrator | 2025-02-10 09:31:30.881876 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:31:30.881884 | orchestrator | Monday 10 February 2025 09:23:46 +0000 (0:00:00.768) 0:06:40.193 ******* 2025-02-10 09:31:30.881891 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881914 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881922 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881930 | orchestrator | 2025-02-10 09:31:30.881938 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:31:30.881943 | orchestrator | Monday 10 February 2025 09:23:47 +0000 (0:00:00.729) 0:06:40.923 ******* 2025-02-10 09:31:30.881948 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:31:30.881954 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.881962 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:31:30.881971 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.881978 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:31:30.881986 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.881995 | orchestrator | 2025-02-10 09:31:30.882003 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:31:30.882050 | orchestrator | Monday 10 February 2025 09:23:48 +0000 (0:00:00.592) 0:06:41.515 ******* 2025-02-10 09:31:30.882060 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.882069 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.882077 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.882090 | orchestrator | 2025-02-10 09:31:30.882099 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:31:30.882107 | orchestrator | Monday 10 February 2025 09:23:48 +0000 (0:00:00.439) 0:06:41.954 ******* 2025-02-10 09:31:30.882114 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.882122 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.882130 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.882138 | orchestrator | 2025-02-10 09:31:30.882146 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:31:30.882154 | orchestrator | Monday 10 February 2025 09:23:49 +0000 (0:00:00.434) 0:06:42.389 ******* 2025-02-10 09:31:30.882162 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:31:30.882170 | 
orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.882178 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:31:30.882185 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.882190 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:31:30.882195 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.882200 | orchestrator | 2025-02-10 09:31:30.882205 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:31:30.882209 | orchestrator | Monday 10 February 2025 09:23:49 +0000 (0:00:00.943) 0:06:43.332 ******* 2025-02-10 09:31:30.882214 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.882219 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.882225 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.882233 | orchestrator | 2025-02-10 09:31:30.882241 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:31:30.882248 | orchestrator | Monday 10 February 2025 09:23:50 +0000 (0:00:00.465) 0:06:43.798 ******* 2025-02-10 09:31:30.882255 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:31:30.882263 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:31:30.882271 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:31:30.882279 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.882294 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-10 09:31:30.882302 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-10 09:31:30.882309 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-10 09:31:30.882317 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.882325 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-10 09:31:30.882336 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-10 09:31:30.882344 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-10 09:31:30.882351 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.882359 | orchestrator | 2025-02-10 09:31:30.882367 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-10 09:31:30.882375 | orchestrator | Monday 10 February 2025 09:23:51 +0000 (0:00:00.706) 0:06:44.505 ******* 2025-02-10 09:31:30.882383 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.882390 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.882398 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.882406 | orchestrator | 2025-02-10 09:31:30.882413 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-10 09:31:30.882421 | orchestrator | Monday 10 February 2025 09:23:52 +0000 (0:00:00.978) 0:06:45.484 ******* 2025-02-10 09:31:30.882428 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.882436 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.882444 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.882451 | orchestrator | 2025-02-10 09:31:30.882460 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-10 09:31:30.882468 | orchestrator | Monday 10 February 2025 09:23:52 +0000 (0:00:00.667) 0:06:46.152 ******* 2025-02-10 09:31:30.882475 | orchestrator | 
skipping: [testbed-node-0] 2025-02-10 09:31:30.882492 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.882500 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.882508 | orchestrator | 2025-02-10 09:31:30.882516 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-10 09:31:30.882549 | orchestrator | Monday 10 February 2025 09:23:53 +0000 (0:00:00.976) 0:06:47.128 ******* 2025-02-10 09:31:30.882557 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.882565 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.882573 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.882581 | orchestrator | 2025-02-10 09:31:30.882589 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-02-10 09:31:30.882601 | orchestrator | Monday 10 February 2025 09:23:54 +0000 (0:00:00.738) 0:06:47.866 ******* 2025-02-10 09:31:30.882609 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:31:30.882617 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:31:30.882626 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:31:30.882631 | orchestrator | 2025-02-10 09:31:30.882636 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-02-10 09:31:30.882641 | orchestrator | Monday 10 February 2025 09:23:55 +0000 (0:00:01.255) 0:06:49.121 ******* 2025-02-10 09:31:30.882645 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.882651 | orchestrator | 2025-02-10 09:31:30.882656 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-02-10 09:31:30.882660 | orchestrator | Monday 10 February 2025 09:23:56 +0000 (0:00:00.613) 0:06:49.735 ******* 2025-02-10 09:31:30.882665 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.882670 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.882675 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.882680 | orchestrator | 2025-02-10 09:31:30.882685 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-02-10 09:31:30.882690 | orchestrator | Monday 10 February 2025 09:23:57 +0000 (0:00:00.714) 0:06:50.449 ******* 2025-02-10 09:31:30.882698 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.882706 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.882713 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.882721 | orchestrator | 2025-02-10 09:31:30.882729 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-02-10 09:31:30.882737 | orchestrator | Monday 10 February 2025 09:23:57 +0000 (0:00:00.679) 0:06:51.129 ******* 2025-02-10 09:31:30.882746 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-10 09:31:30.882751 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-10 09:31:30.882759 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-10 09:31:30.882764 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-02-10 09:31:30.882769 | orchestrator | 2025-02-10 09:31:30.882774 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-02-10 
09:31:30.882778 | orchestrator | Monday 10 February 2025 09:24:06 +0000 (0:00:08.270) 0:06:59.399 ******* 2025-02-10 09:31:30.882783 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.882788 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.882793 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.882798 | orchestrator | 2025-02-10 09:31:30.882803 | orchestrator | TASK [ceph-mgr : get keys from monitors] *************************************** 2025-02-10 09:31:30.882810 | orchestrator | Monday 10 February 2025 09:24:06 +0000 (0:00:00.675) 0:07:00.074 ******* 2025-02-10 09:31:30.882815 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-02-10 09:31:30.882820 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-02-10 09:31:30.882825 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-02-10 09:31:30.882830 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-02-10 09:31:30.882838 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:31:30.882851 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:31:30.882859 | orchestrator | 2025-02-10 09:31:30.882867 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-02-10 09:31:30.882875 | orchestrator | Monday 10 February 2025 09:24:08 +0000 (0:00:02.057) 0:07:02.132 ******* 2025-02-10 09:31:30.882883 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-02-10 09:31:30.882891 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-02-10 09:31:30.882927 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-02-10 09:31:30.882933 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-10 09:31:30.882938 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-02-10 09:31:30.882943 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-02-10 09:31:30.882948 | orchestrator | 2025-02-10 09:31:30.882953 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-02-10 09:31:30.882958 | orchestrator | Monday 10 February 2025 09:24:10 +0000 (0:00:01.358) 0:07:03.490 ******* 2025-02-10 09:31:30.882962 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.882967 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.882972 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.882977 | orchestrator | 2025-02-10 09:31:30.882982 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-02-10 09:31:30.882987 | orchestrator | Monday 10 February 2025 09:24:11 +0000 (0:00:01.052) 0:07:04.543 ******* 2025-02-10 09:31:30.882992 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.882997 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.883002 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.883006 | orchestrator | 2025-02-10 09:31:30.883011 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-02-10 09:31:30.883016 | orchestrator | Monday 10 February 2025 09:24:11 +0000 (0:00:00.407) 0:07:04.950 ******* 2025-02-10 09:31:30.883021 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.883026 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.883031 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.883038 | orchestrator | 2025-02-10 09:31:30.883046 | orchestrator | TASK [ceph-mgr : include 
start_mgr.yml] **************************************** 2025-02-10 09:31:30.883054 | orchestrator | Monday 10 February 2025 09:24:11 +0000 (0:00:00.370) 0:07:05.320 ******* 2025-02-10 09:31:30.883085 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.883094 | orchestrator | 2025-02-10 09:31:30.883102 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] ************* 2025-02-10 09:31:30.883110 | orchestrator | Monday 10 February 2025 09:24:12 +0000 (0:00:00.882) 0:07:06.203 ******* 2025-02-10 09:31:30.883118 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.883126 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.883134 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.883142 | orchestrator | 2025-02-10 09:31:30.883150 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-02-10 09:31:30.883158 | orchestrator | Monday 10 February 2025 09:24:13 +0000 (0:00:00.454) 0:07:06.657 ******* 2025-02-10 09:31:30.883166 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.883175 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.883180 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.883185 | orchestrator | 2025-02-10 09:31:30.883190 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-02-10 09:31:30.883194 | orchestrator | Monday 10 February 2025 09:24:13 +0000 (0:00:00.431) 0:07:07.088 ******* 2025-02-10 09:31:30.883203 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.883208 | orchestrator | 2025-02-10 09:31:30.883213 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-02-10 09:31:30.883222 | orchestrator | Monday 10 February 2025 09:24:14 +0000 (0:00:00.921) 0:07:08.010 ******* 2025-02-10 09:31:30.883227 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.883232 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.883237 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.883242 | orchestrator | 2025-02-10 09:31:30.883247 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-02-10 09:31:30.883251 | orchestrator | Monday 10 February 2025 09:24:16 +0000 (0:00:01.349) 0:07:09.360 ******* 2025-02-10 09:31:30.883256 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.883261 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.883266 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.883271 | orchestrator | 2025-02-10 09:31:30.883275 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-02-10 09:31:30.883280 | orchestrator | Monday 10 February 2025 09:24:17 +0000 (0:00:01.248) 0:07:10.608 ******* 2025-02-10 09:31:30.883285 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.883290 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.883294 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.883299 | orchestrator | 2025-02-10 09:31:30.883304 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-02-10 09:31:30.883309 | orchestrator | Monday 10 February 2025 09:24:19 +0000 (0:00:02.270) 0:07:12.879 ******* 2025-02-10 09:31:30.883314 
| orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.883318 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.883323 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.883329 | orchestrator | 2025-02-10 09:31:30.883337 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-02-10 09:31:30.883345 | orchestrator | Monday 10 February 2025 09:24:21 +0000 (0:00:02.143) 0:07:15.023 ******* 2025-02-10 09:31:30.883352 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.883365 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.883373 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-02-10 09:31:30.883382 | orchestrator | 2025-02-10 09:31:30.883387 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-02-10 09:31:30.883392 | orchestrator | Monday 10 February 2025 09:24:22 +0000 (0:00:00.692) 0:07:15.716 ******* 2025-02-10 09:31:30.883397 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-02-10 09:31:30.883402 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-02-10 09:31:30.883407 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:31:30.883412 | orchestrator | 2025-02-10 09:31:30.883419 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-02-10 09:31:30.883425 | orchestrator | Monday 10 February 2025 09:24:36 +0000 (0:00:14.105) 0:07:29.822 ******* 2025-02-10 09:31:30.883430 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:31:30.883435 | orchestrator | 2025-02-10 09:31:30.883440 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-02-10 09:31:30.883444 | orchestrator | Monday 10 February 2025 09:24:38 +0000 (0:00:01.611) 0:07:31.433 ******* 2025-02-10 09:31:30.883449 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.883454 | orchestrator | 2025-02-10 09:31:30.883459 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-02-10 09:31:30.883464 | orchestrator | Monday 10 February 2025 09:24:38 +0000 (0:00:00.478) 0:07:31.912 ******* 2025-02-10 09:31:30.883469 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.883473 | orchestrator | 2025-02-10 09:31:30.883478 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-02-10 09:31:30.883483 | orchestrator | Monday 10 February 2025 09:24:38 +0000 (0:00:00.309) 0:07:32.222 ******* 2025-02-10 09:31:30.883487 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-02-10 09:31:30.883499 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-02-10 09:31:30.883504 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-02-10 09:31:30.883508 | orchestrator | 2025-02-10 09:31:30.883513 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-02-10 09:31:30.883518 | orchestrator | Monday 10 February 2025 09:24:45 +0000 (0:00:07.059) 0:07:39.281 ******* 2025-02-10 09:31:30.883523 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-02-10 09:31:30.883546 | 
orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-02-10 09:31:30.883552 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-02-10 09:31:30.883559 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-02-10 09:31:30.883567 | orchestrator | 2025-02-10 09:31:30.883575 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-10 09:31:30.883582 | orchestrator | Monday 10 February 2025 09:24:51 +0000 (0:00:05.817) 0:07:45.098 ******* 2025-02-10 09:31:30.883589 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.883598 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.883605 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.883613 | orchestrator | 2025-02-10 09:31:30.883621 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-02-10 09:31:30.883628 | orchestrator | Monday 10 February 2025 09:24:52 +0000 (0:00:01.010) 0:07:46.108 ******* 2025-02-10 09:31:30.883637 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.883645 | orchestrator | 2025-02-10 09:31:30.883653 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-02-10 09:31:30.883661 | orchestrator | Monday 10 February 2025 09:24:53 +0000 (0:00:00.694) 0:07:46.803 ******* 2025-02-10 09:31:30.883670 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.883679 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.883687 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.883695 | orchestrator | 2025-02-10 09:31:30.883700 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-02-10 09:31:30.883705 | orchestrator | Monday 10 February 2025 09:24:53 +0000 (0:00:00.379) 0:07:47.183 ******* 2025-02-10 09:31:30.883710 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.883715 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.883720 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.883725 | orchestrator | 2025-02-10 09:31:30.883730 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-02-10 09:31:30.883735 | orchestrator | Monday 10 February 2025 09:24:55 +0000 (0:00:01.294) 0:07:48.477 ******* 2025-02-10 09:31:30.883739 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:31:30.883744 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:31:30.883749 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:31:30.883754 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.883758 | orchestrator | 2025-02-10 09:31:30.883763 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-02-10 09:31:30.883768 | orchestrator | Monday 10 February 2025 09:24:55 +0000 (0:00:00.801) 0:07:49.278 ******* 2025-02-10 09:31:30.883773 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.883781 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.883789 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.883796 | orchestrator | 2025-02-10 09:31:30.883804 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-10 09:31:30.883812 | 
orchestrator | Monday 10 February 2025 09:24:56 +0000 (0:00:00.534) 0:07:49.812 ******* 2025-02-10 09:31:30.883820 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.883828 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.883836 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.883846 | orchestrator | 2025-02-10 09:31:30.883851 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-02-10 09:31:30.883855 | orchestrator | 2025-02-10 09:31:30.883860 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-10 09:31:30.883868 | orchestrator | Monday 10 February 2025 09:24:59 +0000 (0:00:02.746) 0:07:52.559 ******* 2025-02-10 09:31:30.883873 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.883878 | orchestrator | 2025-02-10 09:31:30.883883 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-10 09:31:30.883888 | orchestrator | Monday 10 February 2025 09:24:59 +0000 (0:00:00.586) 0:07:53.145 ******* 2025-02-10 09:31:30.883893 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.883913 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.883918 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.883923 | orchestrator | 2025-02-10 09:31:30.883928 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-10 09:31:30.883933 | orchestrator | Monday 10 February 2025 09:25:00 +0000 (0:00:00.349) 0:07:53.495 ******* 2025-02-10 09:31:30.883938 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.883943 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.883947 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.883952 | orchestrator | 2025-02-10 09:31:30.883957 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-10 09:31:30.883962 | orchestrator | Monday 10 February 2025 09:25:01 +0000 (0:00:01.145) 0:07:54.640 ******* 2025-02-10 09:31:30.883967 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.883971 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.883976 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.883981 | orchestrator | 2025-02-10 09:31:30.883986 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-10 09:31:30.883990 | orchestrator | Monday 10 February 2025 09:25:02 +0000 (0:00:00.788) 0:07:55.429 ******* 2025-02-10 09:31:30.883995 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.884000 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.884006 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.884014 | orchestrator | 2025-02-10 09:31:30.884021 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-10 09:31:30.884028 | orchestrator | Monday 10 February 2025 09:25:02 +0000 (0:00:00.771) 0:07:56.200 ******* 2025-02-10 09:31:30.884036 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.884043 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.884051 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.884058 | orchestrator | 2025-02-10 09:31:30.884066 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-10 
09:31:30.884074 | orchestrator | Monday 10 February 2025 09:25:03 +0000 (0:00:00.364) 0:07:56.565 ******* 2025-02-10 09:31:30.884103 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.884108 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.884113 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.884118 | orchestrator | 2025-02-10 09:31:30.884123 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-10 09:31:30.884128 | orchestrator | Monday 10 February 2025 09:25:03 +0000 (0:00:00.642) 0:07:57.207 ******* 2025-02-10 09:31:30.884132 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.884137 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.884142 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.884147 | orchestrator | 2025-02-10 09:31:30.884152 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-10 09:31:30.884161 | orchestrator | Monday 10 February 2025 09:25:04 +0000 (0:00:00.412) 0:07:57.619 ******* 2025-02-10 09:31:30.884169 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.884176 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.884184 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.884198 | orchestrator | 2025-02-10 09:31:30.884206 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-10 09:31:30.884213 | orchestrator | Monday 10 February 2025 09:25:04 +0000 (0:00:00.355) 0:07:57.974 ******* 2025-02-10 09:31:30.884221 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.884228 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.884236 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.884243 | orchestrator | 2025-02-10 09:31:30.884251 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-10 09:31:30.884260 | orchestrator | Monday 10 February 2025 09:25:04 +0000 (0:00:00.356) 0:07:58.331 ******* 2025-02-10 09:31:30.884265 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.884270 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.884275 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.884279 | orchestrator | 2025-02-10 09:31:30.884286 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-10 09:31:30.884294 | orchestrator | Monday 10 February 2025 09:25:05 +0000 (0:00:00.680) 0:07:59.012 ******* 2025-02-10 09:31:30.884302 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.884318 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.884326 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.884335 | orchestrator | 2025-02-10 09:31:30.884340 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-10 09:31:30.884345 | orchestrator | Monday 10 February 2025 09:25:06 +0000 (0:00:00.746) 0:07:59.759 ******* 2025-02-10 09:31:30.884349 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.884354 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.884359 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.884364 | orchestrator | 2025-02-10 09:31:30.884369 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-10 09:31:30.884373 | orchestrator | Monday 10 February 2025 09:25:06 +0000 
(0:00:00.340) 0:08:00.099 ******* 2025-02-10 09:31:30.884378 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.884383 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.884388 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.884392 | orchestrator | 2025-02-10 09:31:30.884397 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-10 09:31:30.884402 | orchestrator | Monday 10 February 2025 09:25:07 +0000 (0:00:00.374) 0:08:00.474 ******* 2025-02-10 09:31:30.884407 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.884412 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.884416 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.884421 | orchestrator | 2025-02-10 09:31:30.884426 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-10 09:31:30.884431 | orchestrator | Monday 10 February 2025 09:25:07 +0000 (0:00:00.661) 0:08:01.136 ******* 2025-02-10 09:31:30.884435 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.884440 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.884445 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.884450 | orchestrator | 2025-02-10 09:31:30.884455 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-10 09:31:30.884465 | orchestrator | Monday 10 February 2025 09:25:08 +0000 (0:00:00.369) 0:08:01.505 ******* 2025-02-10 09:31:30.884470 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.884474 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.884479 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.884484 | orchestrator | 2025-02-10 09:31:30.884489 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-10 09:31:30.884494 | orchestrator | Monday 10 February 2025 09:25:08 +0000 (0:00:00.432) 0:08:01.938 ******* 2025-02-10 09:31:30.884498 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.884505 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.884513 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.884520 | orchestrator | 2025-02-10 09:31:30.884528 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-10 09:31:30.884541 | orchestrator | Monday 10 February 2025 09:25:08 +0000 (0:00:00.346) 0:08:02.285 ******* 2025-02-10 09:31:30.884549 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.884557 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.884565 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.884571 | orchestrator | 2025-02-10 09:31:30.884580 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-10 09:31:30.884587 | orchestrator | Monday 10 February 2025 09:25:09 +0000 (0:00:00.638) 0:08:02.923 ******* 2025-02-10 09:31:30.884595 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.884602 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.884610 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.884617 | orchestrator | 2025-02-10 09:31:30.884625 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-10 09:31:30.884633 | orchestrator | Monday 10 February 2025 09:25:09 +0000 (0:00:00.353) 0:08:03.277 ******* 2025-02-10 09:31:30.884641 | orchestrator | ok: [testbed-node-3] 2025-02-10 
09:31:30.884650 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.884655 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.884660 | orchestrator | 2025-02-10 09:31:30.884665 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-10 09:31:30.884670 | orchestrator | Monday 10 February 2025 09:25:10 +0000 (0:00:00.362) 0:08:03.639 ******* 2025-02-10 09:31:30.884674 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.884711 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.884722 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.884730 | orchestrator | 2025-02-10 09:31:30.884737 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-10 09:31:30.884744 | orchestrator | Monday 10 February 2025 09:25:10 +0000 (0:00:00.363) 0:08:04.003 ******* 2025-02-10 09:31:30.884751 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.884759 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.884767 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.884775 | orchestrator | 2025-02-10 09:31:30.884783 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-10 09:31:30.884791 | orchestrator | Monday 10 February 2025 09:25:11 +0000 (0:00:00.672) 0:08:04.675 ******* 2025-02-10 09:31:30.884799 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.884806 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.884814 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.884822 | orchestrator | 2025-02-10 09:31:30.884830 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-10 09:31:30.884838 | orchestrator | Monday 10 February 2025 09:25:11 +0000 (0:00:00.397) 0:08:05.073 ******* 2025-02-10 09:31:30.884846 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.884853 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.884861 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.884870 | orchestrator | 2025-02-10 09:31:30.884878 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-10 09:31:30.884886 | orchestrator | Monday 10 February 2025 09:25:12 +0000 (0:00:00.358) 0:08:05.431 ******* 2025-02-10 09:31:30.884906 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.884912 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.884920 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.884928 | orchestrator | 2025-02-10 09:31:30.884935 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-10 09:31:30.884943 | orchestrator | Monday 10 February 2025 09:25:12 +0000 (0:00:00.349) 0:08:05.781 ******* 2025-02-10 09:31:30.884951 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.884959 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.884967 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.884975 | orchestrator | 2025-02-10 09:31:30.884983 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-10 09:31:30.885000 | orchestrator | Monday 10 February 2025 09:25:13 +0000 (0:00:00.659) 0:08:06.441 ******* 2025-02-10 09:31:30.885006 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.885014 | orchestrator | skipping: [testbed-node-4] 2025-02-10 
09:31:30.885022 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.885030 | orchestrator | 2025-02-10 09:31:30.885038 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-10 09:31:30.885047 | orchestrator | Monday 10 February 2025 09:25:13 +0000 (0:00:00.409) 0:08:06.850 ******* 2025-02-10 09:31:30.885055 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.885068 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.885076 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.885084 | orchestrator | 2025-02-10 09:31:30.885092 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-10 09:31:30.885100 | orchestrator | Monday 10 February 2025 09:25:13 +0000 (0:00:00.342) 0:08:07.193 ******* 2025-02-10 09:31:30.885107 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.885115 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.885123 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.885131 | orchestrator | 2025-02-10 09:31:30.885139 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-10 09:31:30.885147 | orchestrator | Monday 10 February 2025 09:25:14 +0000 (0:00:00.373) 0:08:07.566 ******* 2025-02-10 09:31:30.885155 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.885163 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.885171 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.885179 | orchestrator | 2025-02-10 09:31:30.885187 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-10 09:31:30.885194 | orchestrator | Monday 10 February 2025 09:25:14 +0000 (0:00:00.638) 0:08:08.204 ******* 2025-02-10 09:31:30.885203 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.885210 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.885218 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.885225 | orchestrator | 2025-02-10 09:31:30.885232 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-10 09:31:30.885240 | orchestrator | Monday 10 February 2025 09:25:15 +0000 (0:00:00.376) 0:08:08.581 ******* 2025-02-10 09:31:30.885248 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.885256 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.885264 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.885271 | orchestrator | 2025-02-10 09:31:30.885279 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-10 09:31:30.885288 | orchestrator | Monday 10 February 2025 09:25:15 +0000 (0:00:00.395) 0:08:08.976 ******* 2025-02-10 09:31:30.885296 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:31:30.885304 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:31:30.885312 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:31:30.885320 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:31:30.885327 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.885335 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.885343 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:31:30.885351 | orchestrator | skipping: 
[testbed-node-5] => (item=)  2025-02-10 09:31:30.885359 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.885367 | orchestrator | 2025-02-10 09:31:30.885375 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-10 09:31:30.885387 | orchestrator | Monday 10 February 2025 09:25:16 +0000 (0:00:00.397) 0:08:09.374 ******* 2025-02-10 09:31:30.885395 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-02-10 09:31:30.885403 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-02-10 09:31:30.885439 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.885455 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-02-10 09:31:30.885463 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-10 09:31:30.885471 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.885480 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-10 09:31:30.885487 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-10 09:31:30.885495 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.885503 | orchestrator | 2025-02-10 09:31:30.885511 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-10 09:31:30.885519 | orchestrator | Monday 10 February 2025 09:25:16 +0000 (0:00:00.720) 0:08:10.094 ******* 2025-02-10 09:31:30.885527 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.885535 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.885543 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.885551 | orchestrator | 2025-02-10 09:31:30.885559 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-10 09:31:30.885567 | orchestrator | Monday 10 February 2025 09:25:17 +0000 (0:00:00.366) 0:08:10.461 ******* 2025-02-10 09:31:30.885575 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.885583 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.885591 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.885598 | orchestrator | 2025-02-10 09:31:30.885606 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:31:30.885613 | orchestrator | Monday 10 February 2025 09:25:17 +0000 (0:00:00.417) 0:08:10.878 ******* 2025-02-10 09:31:30.885621 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.885629 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.885637 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.885645 | orchestrator | 2025-02-10 09:31:30.885653 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:31:30.885661 | orchestrator | Monday 10 February 2025 09:25:17 +0000 (0:00:00.383) 0:08:11.261 ******* 2025-02-10 09:31:30.885669 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.885677 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.885685 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.885693 | orchestrator | 2025-02-10 09:31:30.885701 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:31:30.885709 | orchestrator | Monday 10 February 2025 09:25:18 +0000 (0:00:00.729) 0:08:11.991 ******* 2025-02-10 
09:31:30.885717 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.885725 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.885733 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.885741 | orchestrator | 2025-02-10 09:31:30.885749 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:31:30.885757 | orchestrator | Monday 10 February 2025 09:25:19 +0000 (0:00:00.453) 0:08:12.444 ******* 2025-02-10 09:31:30.885764 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.885772 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.885780 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.885788 | orchestrator | 2025-02-10 09:31:30.885796 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:31:30.885804 | orchestrator | Monday 10 February 2025 09:25:19 +0000 (0:00:00.490) 0:08:12.935 ******* 2025-02-10 09:31:30.885812 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.885820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.885828 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.885836 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.885843 | orchestrator | 2025-02-10 09:31:30.885851 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:31:30.885859 | orchestrator | Monday 10 February 2025 09:25:20 +0000 (0:00:00.546) 0:08:13.481 ******* 2025-02-10 09:31:30.885872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.885880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.885888 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.885930 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.885940 | orchestrator | 2025-02-10 09:31:30.885948 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-10 09:31:30.885956 | orchestrator | Monday 10 February 2025 09:25:20 +0000 (0:00:00.531) 0:08:14.012 ******* 2025-02-10 09:31:30.885965 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.885973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.885981 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.885988 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.885997 | orchestrator | 2025-02-10 09:31:30.886004 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:31:30.886035 | orchestrator | Monday 10 February 2025 09:25:21 +0000 (0:00:00.797) 0:08:14.809 ******* 2025-02-10 09:31:30.886044 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.886052 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.886059 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.886066 | orchestrator | 2025-02-10 09:31:30.886074 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:31:30.886082 | orchestrator | Monday 10 February 2025 09:25:22 +0000 (0:00:00.867) 0:08:15.677 ******* 2025-02-10 09:31:30.886089 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:31:30.886097 | 
orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.886105 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:31:30.886113 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.886121 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:31:30.886129 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.886137 | orchestrator | 2025-02-10 09:31:30.886146 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:31:30.886184 | orchestrator | Monday 10 February 2025 09:25:22 +0000 (0:00:00.602) 0:08:16.280 ******* 2025-02-10 09:31:30.886192 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.886200 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.886208 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.886216 | orchestrator | 2025-02-10 09:31:30.886224 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:31:30.886233 | orchestrator | Monday 10 February 2025 09:25:23 +0000 (0:00:00.374) 0:08:16.655 ******* 2025-02-10 09:31:30.886240 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.886254 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.886262 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.886269 | orchestrator | 2025-02-10 09:31:30.886277 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:31:30.886285 | orchestrator | Monday 10 February 2025 09:25:23 +0000 (0:00:00.369) 0:08:17.024 ******* 2025-02-10 09:31:30.886292 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:31:30.886300 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.886309 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:31:30.886316 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.886323 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:31:30.886330 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.886338 | orchestrator | 2025-02-10 09:31:30.886345 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:31:30.886352 | orchestrator | Monday 10 February 2025 09:25:24 +0000 (0:00:01.067) 0:08:18.091 ******* 2025-02-10 09:31:30.886365 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.886382 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.886390 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.886399 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.886407 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.886415 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.886423 | orchestrator | 2025-02-10 09:31:30.886431 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:31:30.886439 | orchestrator | Monday 10 February 2025 09:25:25 +0000 (0:00:00.401) 0:08:18.493 ******* 2025-02-10 09:31:30.886447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.886459 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.886467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.886475 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-10 09:31:30.886483 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-10 09:31:30.886491 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-10 09:31:30.886499 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.886506 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.886514 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-10 09:31:30.886522 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-10 09:31:30.886530 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-10 09:31:30.886538 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.886546 | orchestrator | 2025-02-10 09:31:30.886553 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-10 09:31:30.886562 | orchestrator | Monday 10 February 2025 09:25:25 +0000 (0:00:00.834) 0:08:19.328 ******* 2025-02-10 09:31:30.886569 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.886577 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.886585 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.886590 | orchestrator | 2025-02-10 09:31:30.886595 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-10 09:31:30.886600 | orchestrator | Monday 10 February 2025 09:25:26 +0000 (0:00:00.945) 0:08:20.273 ******* 2025-02-10 09:31:30.886604 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:31:30.886609 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.886614 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-10 09:31:30.886619 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.886624 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-10 09:31:30.886629 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.886633 | orchestrator | 2025-02-10 09:31:30.886638 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-10 09:31:30.886643 | orchestrator | Monday 10 February 2025 09:25:27 +0000 (0:00:00.633) 0:08:20.907 ******* 2025-02-10 09:31:30.886648 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.886653 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.886661 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.886668 | orchestrator | 2025-02-10 09:31:30.886676 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-10 09:31:30.886685 | orchestrator | Monday 10 February 2025 09:25:28 +0000 (0:00:00.941) 0:08:21.849 ******* 2025-02-10 09:31:30.886690 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.886695 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.886700 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.886705 | orchestrator | 2025-02-10 09:31:30.886709 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-02-10 09:31:30.886714 | orchestrator | Monday 10 February 2025 09:25:29 +0000 (0:00:00.603) 0:08:22.452 ******* 2025-02-10 09:31:30.886725 | orchestrator | ok: [testbed-node-3] 
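The rgw_instances items skipped a few tasks above show the per-node structure that ceph-facts derives for the Rados Gateway: one instance named rgw0 per node, bound to that node's 192.168.16.x address on port 8081. A minimal, hypothetical group_vars sketch of an equivalent static definition (the variable placement is an assumption; the values are copied from the skipped items in the log) could look like:

# hypothetical static definition for testbed-node-3; ceph-facts normally derives this itself
rgw_instances:
  - instance_name: rgw0              # single radosgw instance on this node
    radosgw_address: 192.168.16.13   # node address seen in the skipped item above
    radosgw_frontend_port: 8081      # frontend port seen in the skipped item above
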
2025-02-10 09:31:30.886730 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.886734 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.886739 | orchestrator | 2025-02-10 09:31:30.886744 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-02-10 09:31:30.886774 | orchestrator | Monday 10 February 2025 09:25:29 +0000 (0:00:00.657) 0:08:23.110 ******* 2025-02-10 09:31:30.886779 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-10 09:31:30.886784 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:31:30.886789 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:31:30.886794 | orchestrator | 2025-02-10 09:31:30.886799 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-02-10 09:31:30.886804 | orchestrator | Monday 10 February 2025 09:25:30 +0000 (0:00:00.796) 0:08:23.906 ******* 2025-02-10 09:31:30.886808 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.886814 | orchestrator | 2025-02-10 09:31:30.886818 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-02-10 09:31:30.886823 | orchestrator | Monday 10 February 2025 09:25:31 +0000 (0:00:00.645) 0:08:24.552 ******* 2025-02-10 09:31:30.886828 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.886833 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.886838 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.886843 | orchestrator | 2025-02-10 09:31:30.886848 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-02-10 09:31:30.886852 | orchestrator | Monday 10 February 2025 09:25:32 +0000 (0:00:00.799) 0:08:25.351 ******* 2025-02-10 09:31:30.886857 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.886862 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.886867 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.886872 | orchestrator | 2025-02-10 09:31:30.886877 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-02-10 09:31:30.886881 | orchestrator | Monday 10 February 2025 09:25:32 +0000 (0:00:00.457) 0:08:25.808 ******* 2025-02-10 09:31:30.886886 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.886891 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.886913 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.886918 | orchestrator | 2025-02-10 09:31:30.886923 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-02-10 09:31:30.886930 | orchestrator | Monday 10 February 2025 09:25:32 +0000 (0:00:00.361) 0:08:26.170 ******* 2025-02-10 09:31:30.886938 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.886945 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.886952 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.886960 | orchestrator | 2025-02-10 09:31:30.886967 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-02-10 09:31:30.886975 | orchestrator | Monday 10 February 2025 09:25:33 +0000 (0:00:00.350) 0:08:26.521 ******* 2025-02-10 09:31:30.886982 | orchestrator | ok: 
[testbed-node-3] 2025-02-10 09:31:30.886990 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.886998 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.887003 | orchestrator | 2025-02-10 09:31:30.887008 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-02-10 09:31:30.887013 | orchestrator | Monday 10 February 2025 09:25:34 +0000 (0:00:01.000) 0:08:27.521 ******* 2025-02-10 09:31:30.887017 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.887022 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.887027 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.887032 | orchestrator | 2025-02-10 09:31:30.887037 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-02-10 09:31:30.887042 | orchestrator | Monday 10 February 2025 09:25:34 +0000 (0:00:00.464) 0:08:27.986 ******* 2025-02-10 09:31:30.887051 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-02-10 09:31:30.887056 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-02-10 09:31:30.887065 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-02-10 09:31:30.887070 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-02-10 09:31:30.887075 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-02-10 09:31:30.887080 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-02-10 09:31:30.887084 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-02-10 09:31:30.887089 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-02-10 09:31:30.887094 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-02-10 09:31:30.887099 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-02-10 09:31:30.887104 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-02-10 09:31:30.887108 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-02-10 09:31:30.887113 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-02-10 09:31:30.887118 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-02-10 09:31:30.887123 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-02-10 09:31:30.887128 | orchestrator | 2025-02-10 09:31:30.887132 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-02-10 09:31:30.887137 | orchestrator | Monday 10 February 2025 09:25:39 +0000 (0:00:04.465) 0:08:32.451 ******* 2025-02-10 09:31:30.887142 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.887147 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.887152 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.887156 | orchestrator | 2025-02-10 09:31:30.887180 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-02-10 09:31:30.887186 | orchestrator | Monday 10 
February 2025 09:25:39 +0000 (0:00:00.584) 0:08:33.036 ******* 2025-02-10 09:31:30.887191 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.887195 | orchestrator | 2025-02-10 09:31:30.887202 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-02-10 09:31:30.887210 | orchestrator | Monday 10 February 2025 09:25:40 +0000 (0:00:00.659) 0:08:33.695 ******* 2025-02-10 09:31:30.887217 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-02-10 09:31:30.887224 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-02-10 09:31:30.887231 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-02-10 09:31:30.887239 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-02-10 09:31:30.887248 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-02-10 09:31:30.887253 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-02-10 09:31:30.887258 | orchestrator | 2025-02-10 09:31:30.887263 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-02-10 09:31:30.887268 | orchestrator | Monday 10 February 2025 09:25:41 +0000 (0:00:01.159) 0:08:34.855 ******* 2025-02-10 09:31:30.887273 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:31:30.887277 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:31:30.887282 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-02-10 09:31:30.887294 | orchestrator | 2025-02-10 09:31:30.887299 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-02-10 09:31:30.887304 | orchestrator | Monday 10 February 2025 09:25:43 +0000 (0:00:02.116) 0:08:36.972 ******* 2025-02-10 09:31:30.887309 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-10 09:31:30.887314 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:31:30.887319 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.887324 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-10 09:31:30.887332 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-10 09:31:30.887340 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-10 09:31:30.887347 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.887355 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-10 09:31:30.887362 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.887369 | orchestrator | 2025-02-10 09:31:30.887376 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-02-10 09:31:30.887383 | orchestrator | Monday 10 February 2025 09:25:44 +0000 (0:00:01.285) 0:08:38.257 ******* 2025-02-10 09:31:30.887390 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:31:30.887397 | orchestrator | 2025-02-10 09:31:30.887404 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-02-10 09:31:30.887411 | orchestrator | Monday 10 February 2025 09:25:47 +0000 (0:00:02.463) 0:08:40.721 ******* 2025-02-10 09:31:30.887418 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 
09:31:30.887426 | orchestrator | 2025-02-10 09:31:30.887433 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-02-10 09:31:30.887440 | orchestrator | Monday 10 February 2025 09:25:48 +0000 (0:00:00.884) 0:08:41.606 ******* 2025-02-10 09:31:30.887448 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.887461 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.887472 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.887481 | orchestrator | 2025-02-10 09:31:30.887490 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-02-10 09:31:30.887497 | orchestrator | Monday 10 February 2025 09:25:48 +0000 (0:00:00.365) 0:08:41.972 ******* 2025-02-10 09:31:30.887506 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.887511 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.887516 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.887521 | orchestrator | 2025-02-10 09:31:30.887527 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-02-10 09:31:30.887535 | orchestrator | Monday 10 February 2025 09:25:48 +0000 (0:00:00.365) 0:08:42.337 ******* 2025-02-10 09:31:30.887542 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.887550 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.887557 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.887566 | orchestrator | 2025-02-10 09:31:30.887574 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-02-10 09:31:30.887581 | orchestrator | Monday 10 February 2025 09:25:49 +0000 (0:00:00.374) 0:08:42.712 ******* 2025-02-10 09:31:30.887591 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.887596 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.887601 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.887606 | orchestrator | 2025-02-10 09:31:30.887615 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-02-10 09:31:30.887623 | orchestrator | Monday 10 February 2025 09:25:50 +0000 (0:00:00.730) 0:08:43.442 ******* 2025-02-10 09:31:30.887630 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.887638 | orchestrator | 2025-02-10 09:31:30.887645 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-02-10 09:31:30.887659 | orchestrator | Monday 10 February 2025 09:25:50 +0000 (0:00:00.730) 0:08:44.173 ******* 2025-02-10 09:31:30.887666 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c468f1bf-17d5-510b-8602-ed8efc51f14c', 'data_vg': 'ceph-c468f1bf-17d5-510b-8602-ed8efc51f14c'}) 2025-02-10 09:31:30.887706 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f024456c-4135-5029-bf0e-13fb105dc5b7', 'data_vg': 'ceph-f024456c-4135-5029-bf0e-13fb105dc5b7'}) 2025-02-10 09:31:30.887715 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8f95f397-c0f5-5bc9-9af0-9f577faebed9', 'data_vg': 'ceph-8f95f397-c0f5-5bc9-9af0-9f577faebed9'}) 2025-02-10 09:31:30.887723 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a3ebd317-95a0-5383-a134-14be01baa44d', 'data_vg': 
'ceph-a3ebd317-95a0-5383-a134-14be01baa44d'}) 2025-02-10 09:31:30.887732 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9b75c92e-4993-5ff3-a16a-a182a58c3e6b', 'data_vg': 'ceph-9b75c92e-4993-5ff3-a16a-a182a58c3e6b'}) 2025-02-10 09:31:30.887739 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-204ceda1-8353-534a-a397-2ce8fe516c0b', 'data_vg': 'ceph-204ceda1-8353-534a-a397-2ce8fe516c0b'}) 2025-02-10 09:31:30.887747 | orchestrator | 2025-02-10 09:31:30.887755 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-02-10 09:31:30.887763 | orchestrator | Monday 10 February 2025 09:26:31 +0000 (0:00:40.316) 0:09:24.489 ******* 2025-02-10 09:31:30.887771 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.887779 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.887787 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.887795 | orchestrator | 2025-02-10 09:31:30.887803 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] ********************************* 2025-02-10 09:31:30.887811 | orchestrator | Monday 10 February 2025 09:26:31 +0000 (0:00:00.499) 0:09:24.989 ******* 2025-02-10 09:31:30.887819 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.887827 | orchestrator | 2025-02-10 09:31:30.887835 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-02-10 09:31:30.887842 | orchestrator | Monday 10 February 2025 09:26:32 +0000 (0:00:00.730) 0:09:25.719 ******* 2025-02-10 09:31:30.887850 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.887858 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.887866 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.887875 | orchestrator | 2025-02-10 09:31:30.887883 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-02-10 09:31:30.887890 | orchestrator | Monday 10 February 2025 09:26:33 +0000 (0:00:00.750) 0:09:26.470 ******* 2025-02-10 09:31:30.887933 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.887942 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.887951 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.887960 | orchestrator | 2025-02-10 09:31:30.887968 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-02-10 09:31:30.887976 | orchestrator | Monday 10 February 2025 09:26:35 +0000 (0:00:02.121) 0:09:28.592 ******* 2025-02-10 09:31:30.887984 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.887993 | orchestrator | 2025-02-10 09:31:30.888002 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-02-10 09:31:30.888010 | orchestrator | Monday 10 February 2025 09:26:35 +0000 (0:00:00.738) 0:09:29.330 ******* 2025-02-10 09:31:30.888018 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.888027 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.888036 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.888044 | orchestrator | 2025-02-10 09:31:30.888053 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-02-10 09:31:30.888061 | orchestrator | Monday 10 February 2025 09:26:37 +0000 (0:00:01.352) 0:09:30.683 
******* 2025-02-10 09:31:30.888075 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.888084 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.888092 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.888101 | orchestrator | 2025-02-10 09:31:30.888108 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-02-10 09:31:30.888117 | orchestrator | Monday 10 February 2025 09:26:38 +0000 (0:00:01.529) 0:09:32.212 ******* 2025-02-10 09:31:30.888126 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.888134 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.888143 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.888151 | orchestrator | 2025-02-10 09:31:30.888159 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-02-10 09:31:30.888168 | orchestrator | Monday 10 February 2025 09:26:40 +0000 (0:00:01.924) 0:09:34.136 ******* 2025-02-10 09:31:30.888176 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.888184 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.888192 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.888201 | orchestrator | 2025-02-10 09:31:30.888209 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-02-10 09:31:30.888218 | orchestrator | Monday 10 February 2025 09:26:41 +0000 (0:00:00.367) 0:09:34.504 ******* 2025-02-10 09:31:30.888226 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.888234 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.888243 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.888251 | orchestrator | 2025-02-10 09:31:30.888259 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-02-10 09:31:30.888267 | orchestrator | Monday 10 February 2025 09:26:41 +0000 (0:00:00.685) 0:09:35.190 ******* 2025-02-10 09:31:30.888275 | orchestrator | ok: [testbed-node-3] => (item=1) 2025-02-10 09:31:30.888284 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-02-10 09:31:30.888293 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-02-10 09:31:30.888301 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-02-10 09:31:30.888309 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-02-10 09:31:30.888318 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-02-10 09:31:30.888326 | orchestrator | 2025-02-10 09:31:30.888335 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-02-10 09:31:30.888367 | orchestrator | Monday 10 February 2025 09:26:42 +0000 (0:00:01.116) 0:09:36.307 ******* 2025-02-10 09:31:30.888377 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-02-10 09:31:30.888385 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-02-10 09:31:30.888393 | orchestrator | changed: [testbed-node-5] => (item=0) 2025-02-10 09:31:30.888402 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-02-10 09:31:30.888410 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-02-10 09:31:30.888422 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-02-10 09:31:30.888430 | orchestrator | 2025-02-10 09:31:30.888438 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-02-10 09:31:30.888446 | orchestrator | Monday 10 February 2025 09:26:47 +0000 (0:00:04.136) 0:09:40.443 ******* 2025-02-10 09:31:30.888454 
| orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.888462 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.888470 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:31:30.888478 | orchestrator | 2025-02-10 09:31:30.888486 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-02-10 09:31:30.888494 | orchestrator | Monday 10 February 2025 09:26:49 +0000 (0:00:02.537) 0:09:42.981 ******* 2025-02-10 09:31:30.888502 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.888510 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.888518 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 2025-02-10 09:31:30.888525 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:31:30.888537 | orchestrator | 2025-02-10 09:31:30.888545 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-02-10 09:31:30.888553 | orchestrator | Monday 10 February 2025 09:27:02 +0000 (0:00:12.845) 0:09:55.826 ******* 2025-02-10 09:31:30.888561 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.888569 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.888577 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.888585 | orchestrator | 2025-02-10 09:31:30.888593 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-02-10 09:31:30.888600 | orchestrator | Monday 10 February 2025 09:27:03 +0000 (0:00:00.614) 0:09:56.440 ******* 2025-02-10 09:31:30.888608 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.888616 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.888624 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.888636 | orchestrator | 2025-02-10 09:31:30.888648 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-10 09:31:30.888656 | orchestrator | Monday 10 February 2025 09:27:04 +0000 (0:00:01.260) 0:09:57.701 ******* 2025-02-10 09:31:30.888665 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.888673 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.888681 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.888688 | orchestrator | 2025-02-10 09:31:30.888696 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-02-10 09:31:30.888704 | orchestrator | Monday 10 February 2025 09:27:05 +0000 (0:00:00.710) 0:09:58.411 ******* 2025-02-10 09:31:30.888712 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.888720 | orchestrator | 2025-02-10 09:31:30.888728 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-02-10 09:31:30.888736 | orchestrator | Monday 10 February 2025 09:27:05 +0000 (0:00:00.879) 0:09:59.291 ******* 2025-02-10 09:31:30.888743 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.888751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.888759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.888767 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.888775 | orchestrator | 2025-02-10 09:31:30.888783 | 
orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-02-10 09:31:30.888791 | orchestrator | Monday 10 February 2025 09:27:06 +0000 (0:00:00.548) 0:09:59.840 ******* 2025-02-10 09:31:30.888799 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.888807 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.888815 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.888823 | orchestrator | 2025-02-10 09:31:30.888831 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-02-10 09:31:30.888838 | orchestrator | Monday 10 February 2025 09:27:06 +0000 (0:00:00.366) 0:10:00.207 ******* 2025-02-10 09:31:30.888847 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.888854 | orchestrator | 2025-02-10 09:31:30.888862 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-02-10 09:31:30.888870 | orchestrator | Monday 10 February 2025 09:27:07 +0000 (0:00:00.276) 0:10:00.484 ******* 2025-02-10 09:31:30.888878 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.888886 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.888906 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.888915 | orchestrator | 2025-02-10 09:31:30.888922 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-02-10 09:31:30.888931 | orchestrator | Monday 10 February 2025 09:27:07 +0000 (0:00:00.742) 0:10:01.226 ******* 2025-02-10 09:31:30.888938 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.888947 | orchestrator | 2025-02-10 09:31:30.888955 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-02-10 09:31:30.888960 | orchestrator | Monday 10 February 2025 09:27:08 +0000 (0:00:00.331) 0:10:01.558 ******* 2025-02-10 09:31:30.888969 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.888974 | orchestrator | 2025-02-10 09:31:30.888979 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-02-10 09:31:30.888984 | orchestrator | Monday 10 February 2025 09:27:08 +0000 (0:00:00.276) 0:10:01.834 ******* 2025-02-10 09:31:30.888990 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.888998 | orchestrator | 2025-02-10 09:31:30.889006 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-02-10 09:31:30.889013 | orchestrator | Monday 10 February 2025 09:27:08 +0000 (0:00:00.142) 0:10:01.977 ******* 2025-02-10 09:31:30.889046 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.889056 | orchestrator | 2025-02-10 09:31:30.889064 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-02-10 09:31:30.889072 | orchestrator | Monday 10 February 2025 09:27:08 +0000 (0:00:00.282) 0:10:02.259 ******* 2025-02-10 09:31:30.889080 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.889088 | orchestrator | 2025-02-10 09:31:30.889096 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-02-10 09:31:30.889103 | orchestrator | Monday 10 February 2025 09:27:09 +0000 (0:00:00.251) 0:10:02.511 ******* 2025-02-10 09:31:30.889111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.889119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  
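The "apply operating system tuning" task earlier in this play set a handful of kernel parameters on each OSD node before any OSDs were created. A minimal sketch of an equivalent task, assuming the ansible.posix.sysctl module and reusing the exact names and values reported in the log (this is not the ceph-osd role's own implementation), could be:

- name: Apply Ceph OSD operating system tuning     # illustrative only
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
    sysctl_set: true
  loop:
    - { name: fs.aio-max-nr, value: "1048576" }
    - { name: fs.file-max, value: "26234859" }
    - { name: vm.zone_reclaim_mode, value: "0" }
    - { name: vm.swappiness, value: "10" }
    - { name: vm.min_free_kbytes, value: "67584" }  # derived from the default queried two tasks earlier

The OSDs themselves were then created by the included lvm.yml scenario via ceph-volume against the pre-provisioned osd-block-* logical volumes, with bluestore and dmcrypt enabled as reflected in the container_env_args fact, before the systemd ceph-osd units were generated, enabled, and started.
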
2025-02-10 09:31:30.889127 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.889136 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.889144 | orchestrator | 2025-02-10 09:31:30.889151 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-02-10 09:31:30.889159 | orchestrator | Monday 10 February 2025 09:27:09 +0000 (0:00:00.496) 0:10:03.008 ******* 2025-02-10 09:31:30.889167 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.889175 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.889183 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.889191 | orchestrator | 2025-02-10 09:31:30.889199 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] *************** 2025-02-10 09:31:30.889207 | orchestrator | Monday 10 February 2025 09:27:10 +0000 (0:00:00.366) 0:10:03.374 ******* 2025-02-10 09:31:30.889215 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.889223 | orchestrator | 2025-02-10 09:31:30.889231 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-02-10 09:31:30.889239 | orchestrator | Monday 10 February 2025 09:27:10 +0000 (0:00:00.700) 0:10:04.074 ******* 2025-02-10 09:31:30.889247 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.889255 | orchestrator | 2025-02-10 09:31:30.889262 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-10 09:31:30.889271 | orchestrator | Monday 10 February 2025 09:27:10 +0000 (0:00:00.251) 0:10:04.326 ******* 2025-02-10 09:31:30.889278 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.889286 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.889294 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.889302 | orchestrator | 2025-02-10 09:31:30.889314 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-02-10 09:31:30.889322 | orchestrator | 2025-02-10 09:31:30.889330 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-10 09:31:30.889338 | orchestrator | Monday 10 February 2025 09:27:14 +0000 (0:00:03.203) 0:10:07.529 ******* 2025-02-10 09:31:30.889346 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.889356 | orchestrator | 2025-02-10 09:31:30.889364 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-10 09:31:30.889372 | orchestrator | Monday 10 February 2025 09:27:15 +0000 (0:00:01.535) 0:10:09.065 ******* 2025-02-10 09:31:30.889380 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.889392 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.889400 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.889408 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.889417 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.889424 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.889432 | orchestrator | 2025-02-10 09:31:30.889440 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-10 09:31:30.889448 | orchestrator | Monday 10 February 2025 09:27:16 +0000 (0:00:01.116) 0:10:10.182 ******* 2025-02-10 09:31:30.889456 
| orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.889464 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.889472 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.889480 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.889488 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.889496 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.889504 | orchestrator | 2025-02-10 09:31:30.889512 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-10 09:31:30.889520 | orchestrator | Monday 10 February 2025 09:27:17 +0000 (0:00:01.040) 0:10:11.222 ******* 2025-02-10 09:31:30.889528 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.889536 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.889544 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.889552 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.889560 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.889567 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.889575 | orchestrator | 2025-02-10 09:31:30.889583 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-10 09:31:30.889591 | orchestrator | Monday 10 February 2025 09:27:18 +0000 (0:00:00.732) 0:10:11.954 ******* 2025-02-10 09:31:30.889598 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.889606 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.889614 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.889622 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.889630 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.889638 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.889646 | orchestrator | 2025-02-10 09:31:30.889654 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-10 09:31:30.889662 | orchestrator | Monday 10 February 2025 09:27:19 +0000 (0:00:01.222) 0:10:13.177 ******* 2025-02-10 09:31:30.889670 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.889678 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.889689 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.889697 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.889705 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.889713 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.889721 | orchestrator | 2025-02-10 09:31:30.889729 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-10 09:31:30.889737 | orchestrator | Monday 10 February 2025 09:27:21 +0000 (0:00:01.317) 0:10:14.494 ******* 2025-02-10 09:31:30.889750 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.889779 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.889788 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.889797 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.889804 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.889812 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.889820 | orchestrator | 2025-02-10 09:31:30.889828 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-10 09:31:30.889836 | orchestrator | Monday 10 February 2025 09:27:22 +0000 (0:00:01.027) 0:10:15.522 ******* 2025-02-10 09:31:30.889844 | orchestrator | skipping: [testbed-node-3] 2025-02-10 
09:31:30.889852 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.889860 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.889868 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.889876 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.889892 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.889929 | orchestrator | 2025-02-10 09:31:30.889938 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-10 09:31:30.889946 | orchestrator | Monday 10 February 2025 09:27:22 +0000 (0:00:00.633) 0:10:16.155 ******* 2025-02-10 09:31:30.889954 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.889962 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.889970 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.889978 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.889986 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.889993 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.890001 | orchestrator | 2025-02-10 09:31:30.890009 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-10 09:31:30.890040 | orchestrator | Monday 10 February 2025 09:27:23 +0000 (0:00:01.010) 0:10:17.166 ******* 2025-02-10 09:31:30.890049 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.890057 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.890066 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.890074 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.890083 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.890092 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.890101 | orchestrator | 2025-02-10 09:31:30.890110 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-10 09:31:30.890118 | orchestrator | Monday 10 February 2025 09:27:24 +0000 (0:00:00.698) 0:10:17.865 ******* 2025-02-10 09:31:30.890127 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.890136 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.890144 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.890153 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.890161 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.890170 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.890178 | orchestrator | 2025-02-10 09:31:30.890187 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-10 09:31:30.890196 | orchestrator | Monday 10 February 2025 09:27:25 +0000 (0:00:01.110) 0:10:18.975 ******* 2025-02-10 09:31:30.890205 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.890214 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.890222 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.890231 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.890239 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.890248 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.890256 | orchestrator | 2025-02-10 09:31:30.890265 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-10 09:31:30.890273 | orchestrator | Monday 10 February 2025 09:27:26 +0000 (0:00:01.072) 0:10:20.048 ******* 2025-02-10 09:31:30.890282 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.890291 | orchestrator 
| skipping: [testbed-node-4] 2025-02-10 09:31:30.890299 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.890308 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.890316 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.890325 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.890333 | orchestrator | 2025-02-10 09:31:30.890342 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-10 09:31:30.890350 | orchestrator | Monday 10 February 2025 09:27:27 +0000 (0:00:00.982) 0:10:21.030 ******* 2025-02-10 09:31:30.890358 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.890367 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.890376 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.890384 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.890393 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.890406 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.890414 | orchestrator | 2025-02-10 09:31:30.890423 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-10 09:31:30.890436 | orchestrator | Monday 10 February 2025 09:27:28 +0000 (0:00:00.774) 0:10:21.805 ******* 2025-02-10 09:31:30.890445 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.890453 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.890461 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.890469 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.890478 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.890486 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.890495 | orchestrator | 2025-02-10 09:31:30.890507 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-10 09:31:30.890516 | orchestrator | Monday 10 February 2025 09:27:29 +0000 (0:00:01.045) 0:10:22.850 ******* 2025-02-10 09:31:30.890524 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.890549 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.890558 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.890567 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.890576 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.890584 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.890593 | orchestrator | 2025-02-10 09:31:30.890601 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-10 09:31:30.890610 | orchestrator | Monday 10 February 2025 09:27:30 +0000 (0:00:00.708) 0:10:23.558 ******* 2025-02-10 09:31:30.890618 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.890627 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.890635 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.890644 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.890652 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.890661 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.890669 | orchestrator | 2025-02-10 09:31:30.890678 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-10 09:31:30.890708 | orchestrator | Monday 10 February 2025 09:27:31 +0000 (0:00:01.132) 0:10:24.691 ******* 2025-02-10 09:31:30.890717 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.890726 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.890734 | 
orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.890742 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.890750 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.890758 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.890765 | orchestrator | 2025-02-10 09:31:30.890773 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-10 09:31:30.890781 | orchestrator | Monday 10 February 2025 09:27:32 +0000 (0:00:00.689) 0:10:25.380 ******* 2025-02-10 09:31:30.890789 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.890796 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.890804 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.890813 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.890821 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.890829 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.890837 | orchestrator | 2025-02-10 09:31:30.890845 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-10 09:31:30.890853 | orchestrator | Monday 10 February 2025 09:27:33 +0000 (0:00:01.223) 0:10:26.603 ******* 2025-02-10 09:31:30.890861 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.890868 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.890876 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.890884 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.890892 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.890912 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.890920 | orchestrator | 2025-02-10 09:31:30.890928 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-10 09:31:30.890936 | orchestrator | Monday 10 February 2025 09:27:34 +0000 (0:00:00.814) 0:10:27.418 ******* 2025-02-10 09:31:30.890944 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.890952 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.890959 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.890972 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.890980 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.890988 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.890996 | orchestrator | 2025-02-10 09:31:30.891004 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-10 09:31:30.891012 | orchestrator | Monday 10 February 2025 09:27:35 +0000 (0:00:01.071) 0:10:28.490 ******* 2025-02-10 09:31:30.891020 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.891028 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.891036 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.891043 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.891051 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.891059 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.891066 | orchestrator | 2025-02-10 09:31:30.891074 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-10 09:31:30.891082 | orchestrator | Monday 10 February 2025 09:27:35 +0000 (0:00:00.774) 0:10:29.264 ******* 2025-02-10 09:31:30.891090 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.891098 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.891106 | orchestrator | skipping: [testbed-node-5] 
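The ceph-handler checks above only record whether the expected containers are running on each node, so that the later handler_*_status facts (handler_osd_status, handler_mon_status, and so on) can gate daemon restarts. A rough sketch of such a check, assuming docker as the container runtime and the ceph-osd container name pattern (the real role derives both from its own variables), might look like:

- name: Check for a running ceph-osd container      # illustrative check, not the role's task
  ansible.builtin.command: docker ps -q --filter "name=ceph-osd"
  register: ceph_osd_container_stat
  changed_when: false
  failed_when: false

- name: Record whether an OSD container is running on this host
  ansible.builtin.set_fact:
    handler_osd_status: "{{ (ceph_osd_container_stat.stdout | length) > 0 }}"

This mirrors the pattern visible in the log: each check runs only on the relevant group (the OSD nodes here, the monitor nodes for the mon and mgr checks), and the resulting facts are consumed later by the restart handlers.
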
2025-02-10 09:31:30.891117 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.891125 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.891133 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.891141 | orchestrator | 2025-02-10 09:31:30.891149 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-10 09:31:30.891157 | orchestrator | Monday 10 February 2025 09:27:37 +0000 (0:00:01.150) 0:10:30.414 ******* 2025-02-10 09:31:30.891165 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.891173 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.891181 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.891189 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.891197 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.891205 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.891213 | orchestrator | 2025-02-10 09:31:30.891220 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-10 09:31:30.891227 | orchestrator | Monday 10 February 2025 09:27:37 +0000 (0:00:00.828) 0:10:31.243 ******* 2025-02-10 09:31:30.891234 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.891242 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.891249 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.891258 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.891266 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.891274 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.891282 | orchestrator | 2025-02-10 09:31:30.891290 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-10 09:31:30.891298 | orchestrator | Monday 10 February 2025 09:27:38 +0000 (0:00:01.044) 0:10:32.287 ******* 2025-02-10 09:31:30.891305 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.891313 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.891321 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.891329 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.891337 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.891345 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.891353 | orchestrator | 2025-02-10 09:31:30.891361 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-10 09:31:30.891369 | orchestrator | Monday 10 February 2025 09:27:39 +0000 (0:00:00.782) 0:10:33.069 ******* 2025-02-10 09:31:30.891376 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.891384 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.891392 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.891400 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.891408 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.891416 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.891429 | orchestrator | 2025-02-10 09:31:30.891437 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-10 09:31:30.891445 | orchestrator | Monday 10 February 2025 09:27:40 +0000 (0:00:01.027) 0:10:34.097 ******* 2025-02-10 09:31:30.891453 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.891461 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.891468 | orchestrator | skipping: 
[testbed-node-5] 2025-02-10 09:31:30.891477 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.891484 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.891513 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.891522 | orchestrator | 2025-02-10 09:31:30.891530 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-10 09:31:30.891538 | orchestrator | Monday 10 February 2025 09:27:41 +0000 (0:00:00.786) 0:10:34.883 ******* 2025-02-10 09:31:30.891546 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.891553 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.891561 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.891569 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.891577 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.891585 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.891593 | orchestrator | 2025-02-10 09:31:30.891600 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-10 09:31:30.891608 | orchestrator | Monday 10 February 2025 09:27:42 +0000 (0:00:01.021) 0:10:35.904 ******* 2025-02-10 09:31:30.891616 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.891624 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.891632 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.891640 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.891647 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.891656 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.891664 | orchestrator | 2025-02-10 09:31:30.891672 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-10 09:31:30.891680 | orchestrator | Monday 10 February 2025 09:27:43 +0000 (0:00:00.751) 0:10:36.655 ******* 2025-02-10 09:31:30.891688 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.891695 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.891703 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.891711 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.891719 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.891731 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.891738 | orchestrator | 2025-02-10 09:31:30.891746 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-10 09:31:30.891754 | orchestrator | Monday 10 February 2025 09:27:44 +0000 (0:00:01.170) 0:10:37.825 ******* 2025-02-10 09:31:30.891764 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.891772 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.891777 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.891782 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.891786 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.891791 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.891796 | orchestrator | 2025-02-10 09:31:30.891802 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-10 09:31:30.891810 | orchestrator | Monday 10 February 2025 09:27:45 +0000 (0:00:00.837) 0:10:38.663 ******* 2025-02-10 09:31:30.891818 | orchestrator | skipping: [testbed-node-3] 2025-02-10 
09:31:30.891825 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.891832 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.891839 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.891846 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.891853 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.891861 | orchestrator | 2025-02-10 09:31:30.891873 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-10 09:31:30.891886 | orchestrator | Monday 10 February 2025 09:27:46 +0000 (0:00:01.000) 0:10:39.664 ******* 2025-02-10 09:31:30.891906 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:31:30.891914 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:31:30.891922 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:31:30.891931 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:31:30.891938 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.891946 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:31:30.891954 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:31:30.891962 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.891970 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:31:30.891979 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-10 09:31:30.891986 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.891994 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:31:30.892002 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-10 09:31:30.892010 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.892018 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.892026 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:31:30.892034 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-10 09:31:30.892042 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.892050 | orchestrator | 2025-02-10 09:31:30.892057 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-10 09:31:30.892066 | orchestrator | Monday 10 February 2025 09:27:47 +0000 (0:00:01.147) 0:10:40.811 ******* 2025-02-10 09:31:30.892073 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-02-10 09:31:30.892081 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-02-10 09:31:30.892089 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.892097 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-02-10 09:31:30.892106 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-10 09:31:30.892114 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.892122 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-10 09:31:30.892130 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-10 09:31:30.892137 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.892146 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-02-10 09:31:30.892152 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-02-10 09:31:30.892160 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.892168 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-02-10 
09:31:30.892175 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-02-10 09:31:30.892183 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.892216 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-02-10 09:31:30.892226 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-02-10 09:31:30.892234 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.892242 | orchestrator | 2025-02-10 09:31:30.892250 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-10 09:31:30.892257 | orchestrator | Monday 10 February 2025 09:27:48 +0000 (0:00:01.238) 0:10:42.050 ******* 2025-02-10 09:31:30.892265 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.892273 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.892281 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.892289 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.892297 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.892304 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.892312 | orchestrator | 2025-02-10 09:31:30.892319 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-10 09:31:30.892332 | orchestrator | Monday 10 February 2025 09:27:49 +0000 (0:00:01.028) 0:10:43.078 ******* 2025-02-10 09:31:30.892341 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.892349 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.892357 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.892364 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.892372 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.892380 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.892388 | orchestrator | 2025-02-10 09:31:30.892396 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:31:30.892405 | orchestrator | Monday 10 February 2025 09:27:51 +0000 (0:00:01.506) 0:10:44.585 ******* 2025-02-10 09:31:30.892413 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.892421 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.892429 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.892436 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.892444 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.892452 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.892460 | orchestrator | 2025-02-10 09:31:30.892468 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:31:30.892476 | orchestrator | Monday 10 February 2025 09:27:52 +0000 (0:00:00.853) 0:10:45.439 ******* 2025-02-10 09:31:30.892484 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.892492 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.892500 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.892508 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.892515 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.892523 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.892531 | orchestrator | 2025-02-10 09:31:30.892539 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:31:30.892548 | orchestrator | Monday 
10 February 2025 09:27:53 +0000 (0:00:01.119) 0:10:46.559 ******* 2025-02-10 09:31:30.892554 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.892559 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.892567 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.892572 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.892577 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.892581 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.892586 | orchestrator | 2025-02-10 09:31:30.892591 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:31:30.892596 | orchestrator | Monday 10 February 2025 09:27:54 +0000 (0:00:01.046) 0:10:47.605 ******* 2025-02-10 09:31:30.892600 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.892605 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.892610 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.892615 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.892620 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.892624 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.892629 | orchestrator | 2025-02-10 09:31:30.892634 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:31:30.892639 | orchestrator | Monday 10 February 2025 09:27:55 +0000 (0:00:01.672) 0:10:49.278 ******* 2025-02-10 09:31:30.892644 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.892648 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.892653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.892658 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.892663 | orchestrator | 2025-02-10 09:31:30.892668 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:31:30.892673 | orchestrator | Monday 10 February 2025 09:27:56 +0000 (0:00:00.573) 0:10:49.851 ******* 2025-02-10 09:31:30.892677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.892686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.892691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.892696 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.892704 | orchestrator | 2025-02-10 09:31:30.892712 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-10 09:31:30.892719 | orchestrator | Monday 10 February 2025 09:27:57 +0000 (0:00:00.602) 0:10:50.453 ******* 2025-02-10 09:31:30.892727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.892734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.892742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.892750 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.892757 | orchestrator | 2025-02-10 09:31:30.892762 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:31:30.892767 | orchestrator | Monday 10 February 2025 09:27:57 +0000 (0:00:00.831) 0:10:51.285 ******* 2025-02-10 09:31:30.892772 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.892777 | orchestrator | 
skipping: [testbed-node-4] 2025-02-10 09:31:30.892781 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.892786 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.892811 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.892817 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.892822 | orchestrator | 2025-02-10 09:31:30.892827 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:31:30.892832 | orchestrator | Monday 10 February 2025 09:27:59 +0000 (0:00:01.345) 0:10:52.630 ******* 2025-02-10 09:31:30.892836 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:31:30.892841 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.892846 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:31:30.892851 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.892856 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:31:30.892861 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:31:30.892865 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.892870 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.892875 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:31:30.892880 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.892885 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:31:30.892889 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.892924 | orchestrator | 2025-02-10 09:31:30.892930 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:31:30.892935 | orchestrator | Monday 10 February 2025 09:28:00 +0000 (0:00:01.520) 0:10:54.150 ******* 2025-02-10 09:31:30.892940 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.892945 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.892950 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.892955 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.892960 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.892964 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.892969 | orchestrator | 2025-02-10 09:31:30.892974 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:31:30.892979 | orchestrator | Monday 10 February 2025 09:28:02 +0000 (0:00:01.435) 0:10:55.586 ******* 2025-02-10 09:31:30.892984 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.892989 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.892993 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.892998 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.893003 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.893008 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.893013 | orchestrator | 2025-02-10 09:31:30.893017 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:31:30.893026 | orchestrator | Monday 10 February 2025 09:28:03 +0000 (0:00:00.846) 0:10:56.432 ******* 2025-02-10 09:31:30.893031 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:31:30.893036 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.893041 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:31:30.893046 | orchestrator | skipping: [testbed-node-4] 
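The ceph-config tasks above were skipped on these hosts; when they do run, they wrap ceph-volume invocations roughly like the following sketch (illustrative only: /dev/sdb and /dev/sdc are placeholder device paths, and the exact JSON layout of the report depends on the ceph-volume release):

  # Preview how many OSDs an 'lvm batch' run would create on the given devices (report only, no changes)
  ceph-volume lvm batch --report --format=json /dev/sdb /dev/sdc
  # List OSDs that have already been created on this host
  ceph-volume lvm list --format=json

The role only needs the counts from these two reports (plus any pre-existing OSDs) to derive facts such as num_osds and, from that, an osd_memory_target.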
2025-02-10 09:31:30.893051 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:31:30.893055 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.893060 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-10 09:31:30.893065 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.893070 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-10 09:31:30.893074 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.893079 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-10 09:31:30.893084 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.893089 | orchestrator | 2025-02-10 09:31:30.893094 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:31:30.893105 | orchestrator | Monday 10 February 2025 09:28:04 +0000 (0:00:01.523) 0:10:57.956 ******* 2025-02-10 09:31:30.893110 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.893114 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.893120 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.893124 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.893130 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.893134 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.893139 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.893144 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.893149 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.893153 | orchestrator | 2025-02-10 09:31:30.893158 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:31:30.893163 | orchestrator | Monday 10 February 2025 09:28:05 +0000 (0:00:00.895) 0:10:58.851 ******* 2025-02-10 09:31:30.893168 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.893173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.893177 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.893182 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.893187 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-10 09:31:30.893192 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-10 09:31:30.893197 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-10 09:31:30.893201 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.893209 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-10 09:31:30.893218 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-10 09:31:30.893223 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-10 09:31:30.893227 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.893235 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-10 09:31:30.893243 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-10 09:31:30.893252 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-10 09:31:30.893263 | 
orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.893272 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-10 09:31:30.893281 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-10 09:31:30.893288 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-10 09:31:30.893297 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.893301 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-10 09:31:30.893306 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-10 09:31:30.893311 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-10 09:31:30.893316 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.893321 | orchestrator | 2025-02-10 09:31:30.893325 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-10 09:31:30.893330 | orchestrator | Monday 10 February 2025 09:28:07 +0000 (0:00:01.713) 0:11:00.565 ******* 2025-02-10 09:31:30.893335 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.893340 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.893345 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.893349 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.893354 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.893359 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.893364 | orchestrator | 2025-02-10 09:31:30.893369 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-10 09:31:30.893374 | orchestrator | Monday 10 February 2025 09:28:08 +0000 (0:00:01.531) 0:11:02.096 ******* 2025-02-10 09:31:30.893378 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:31:30.893383 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.893388 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-10 09:31:30.893393 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.893398 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-10 09:31:30.893402 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.893407 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.893412 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.893417 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.893422 | orchestrator | 2025-02-10 09:31:30.893427 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-10 09:31:30.893431 | orchestrator | Monday 10 February 2025 09:28:10 +0000 (0:00:01.576) 0:11:03.672 ******* 2025-02-10 09:31:30.893436 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.893441 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.893446 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.893451 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.893456 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.893460 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.893465 | orchestrator | 2025-02-10 09:31:30.893470 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-10 09:31:30.893475 | orchestrator | Monday 10 February 2025 09:28:11 +0000 (0:00:01.604) 0:11:05.277 ******* 2025-02-10 09:31:30.893479 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.893484 | 
orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.893489 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.893494 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:31:30.893499 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:31:30.893503 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:31:30.893508 | orchestrator | 2025-02-10 09:31:30.893513 | orchestrator | TASK [ceph-crash : create client.crash keyring] ******************************** 2025-02-10 09:31:30.893518 | orchestrator | Monday 10 February 2025 09:28:13 +0000 (0:00:01.560) 0:11:06.837 ******* 2025-02-10 09:31:30.893523 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:31:30.893527 | orchestrator | 2025-02-10 09:31:30.893532 | orchestrator | TASK [ceph-crash : get keys from monitors] ************************************* 2025-02-10 09:31:30.893537 | orchestrator | Monday 10 February 2025 09:28:17 +0000 (0:00:04.253) 0:11:11.091 ******* 2025-02-10 09:31:30.893542 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:31:30.893547 | orchestrator | 2025-02-10 09:31:30.893551 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-02-10 09:31:30.893559 | orchestrator | Monday 10 February 2025 09:28:19 +0000 (0:00:01.895) 0:11:12.987 ******* 2025-02-10 09:31:30.893564 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.893569 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.893574 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.893578 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.893583 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.893588 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.893593 | orchestrator | 2025-02-10 09:31:30.893597 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-02-10 09:31:30.893602 | orchestrator | Monday 10 February 2025 09:28:22 +0000 (0:00:02.428) 0:11:15.415 ******* 2025-02-10 09:31:30.893607 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.893612 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.893617 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.893621 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.893626 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.893631 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.893636 | orchestrator | 2025-02-10 09:31:30.893640 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] ********************************** 2025-02-10 09:31:30.893645 | orchestrator | Monday 10 February 2025 09:28:23 +0000 (0:00:01.766) 0:11:17.182 ******* 2025-02-10 09:31:30.893650 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.893656 | orchestrator | 2025-02-10 09:31:30.893661 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ******** 2025-02-10 09:31:30.893666 | orchestrator | Monday 10 February 2025 09:28:25 +0000 (0:00:01.904) 0:11:19.086 ******* 2025-02-10 09:31:30.893670 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.893675 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.893680 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.893685 | orchestrator | changed: [testbed-node-0] 
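The ceph-crash steps above amount to a small number of per-node operations; a rough manual equivalent is sketched below (the keyring caps follow the upstream crash-module documentation, while the ceph-crash@<hostname> unit name assumes ceph-ansible's containerized layout and is not taken from this log):

  # Create the crash-reporting keyring (run against a monitor)
  ceph auth get-or-create client.crash mon 'profile crash' mgr 'profile crash'
  # Directory where posted crash dumps are archived on each node
  mkdir -p /var/lib/ceph/crash/posted
  # Reload systemd and start the per-host crash collector generated by the role
  systemctl daemon-reload
  systemctl enable --now ceph-crash@testbed-node-3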
2025-02-10 09:31:30.893697 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.893702 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.893707 | orchestrator | 2025-02-10 09:31:30.893712 | orchestrator | TASK [ceph-crash : start the ceph-crash service] ******************************* 2025-02-10 09:31:30.893717 | orchestrator | Monday 10 February 2025 09:28:27 +0000 (0:00:01.960) 0:11:21.047 ******* 2025-02-10 09:31:30.893721 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.893726 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.893731 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.893736 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.893741 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.893745 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.893754 | orchestrator | 2025-02-10 09:31:30.893759 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-02-10 09:31:30.893764 | orchestrator | Monday 10 February 2025 09:28:32 +0000 (0:00:05.037) 0:11:26.084 ******* 2025-02-10 09:31:30.893769 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:31:30.893774 | orchestrator | 2025-02-10 09:31:30.893779 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-02-10 09:31:30.893784 | orchestrator | Monday 10 February 2025 09:28:34 +0000 (0:00:01.655) 0:11:27.740 ******* 2025-02-10 09:31:30.893792 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.893801 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.893809 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.893816 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.893824 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.893831 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.893838 | orchestrator | 2025-02-10 09:31:30.893846 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-02-10 09:31:30.893860 | orchestrator | Monday 10 February 2025 09:28:35 +0000 (0:00:00.815) 0:11:28.556 ******* 2025-02-10 09:31:30.893866 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.893872 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.893879 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.893887 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:31:30.893908 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:31:30.893917 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:31:30.893925 | orchestrator | 2025-02-10 09:31:30.893935 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-02-10 09:31:30.893944 | orchestrator | Monday 10 February 2025 09:28:38 +0000 (0:00:03.090) 0:11:31.646 ******* 2025-02-10 09:31:30.893949 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.893954 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.893959 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.893963 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:31:30.893968 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:31:30.893973 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:31:30.893978 | orchestrator | 2025-02-10 09:31:30.893982 | orchestrator | PLAY [Apply role ceph-mds] 
***************************************************** 2025-02-10 09:31:30.893987 | orchestrator | 2025-02-10 09:31:30.893992 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-10 09:31:30.893997 | orchestrator | Monday 10 February 2025 09:28:41 +0000 (0:00:02.984) 0:11:34.631 ******* 2025-02-10 09:31:30.894002 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.894007 | orchestrator | 2025-02-10 09:31:30.894034 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-10 09:31:30.894039 | orchestrator | Monday 10 February 2025 09:28:41 +0000 (0:00:00.601) 0:11:35.232 ******* 2025-02-10 09:31:30.894044 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894049 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894054 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894059 | orchestrator | 2025-02-10 09:31:30.894063 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-10 09:31:30.894068 | orchestrator | Monday 10 February 2025 09:28:42 +0000 (0:00:00.771) 0:11:36.004 ******* 2025-02-10 09:31:30.894073 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.894078 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.894082 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.894087 | orchestrator | 2025-02-10 09:31:30.894092 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-10 09:31:30.894097 | orchestrator | Monday 10 February 2025 09:28:43 +0000 (0:00:00.740) 0:11:36.745 ******* 2025-02-10 09:31:30.894102 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.894106 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.894114 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.894119 | orchestrator | 2025-02-10 09:31:30.894124 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-10 09:31:30.894129 | orchestrator | Monday 10 February 2025 09:28:44 +0000 (0:00:00.750) 0:11:37.496 ******* 2025-02-10 09:31:30.894133 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.894138 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.894143 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.894147 | orchestrator | 2025-02-10 09:31:30.894152 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-10 09:31:30.894157 | orchestrator | Monday 10 February 2025 09:28:45 +0000 (0:00:01.054) 0:11:38.551 ******* 2025-02-10 09:31:30.894162 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894167 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894172 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894176 | orchestrator | 2025-02-10 09:31:30.894181 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-10 09:31:30.894186 | orchestrator | Monday 10 February 2025 09:28:45 +0000 (0:00:00.388) 0:11:38.940 ******* 2025-02-10 09:31:30.894195 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894200 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894205 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894209 | orchestrator | 2025-02-10 09:31:30.894214 | orchestrator | TASK [ceph-handler 
: check for a nfs container] ******************************** 2025-02-10 09:31:30.894219 | orchestrator | Monday 10 February 2025 09:28:45 +0000 (0:00:00.348) 0:11:39.288 ******* 2025-02-10 09:31:30.894224 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894229 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894238 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894243 | orchestrator | 2025-02-10 09:31:30.894248 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-10 09:31:30.894253 | orchestrator | Monday 10 February 2025 09:28:46 +0000 (0:00:00.355) 0:11:39.644 ******* 2025-02-10 09:31:30.894259 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894268 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894276 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894283 | orchestrator | 2025-02-10 09:31:30.894291 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-10 09:31:30.894299 | orchestrator | Monday 10 February 2025 09:28:46 +0000 (0:00:00.644) 0:11:40.288 ******* 2025-02-10 09:31:30.894307 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894316 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894325 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894333 | orchestrator | 2025-02-10 09:31:30.894341 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-10 09:31:30.894350 | orchestrator | Monday 10 February 2025 09:28:47 +0000 (0:00:00.383) 0:11:40.672 ******* 2025-02-10 09:31:30.894358 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894367 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894376 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894381 | orchestrator | 2025-02-10 09:31:30.894386 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-10 09:31:30.894391 | orchestrator | Monday 10 February 2025 09:28:47 +0000 (0:00:00.357) 0:11:41.030 ******* 2025-02-10 09:31:30.894395 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.894400 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.894409 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.894414 | orchestrator | 2025-02-10 09:31:30.894418 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-10 09:31:30.894423 | orchestrator | Monday 10 February 2025 09:28:48 +0000 (0:00:00.784) 0:11:41.814 ******* 2025-02-10 09:31:30.894428 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894433 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894437 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894442 | orchestrator | 2025-02-10 09:31:30.894447 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-10 09:31:30.894452 | orchestrator | Monday 10 February 2025 09:28:49 +0000 (0:00:00.694) 0:11:42.509 ******* 2025-02-10 09:31:30.894457 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894461 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894466 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894471 | orchestrator | 2025-02-10 09:31:30.894476 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-10 
09:31:30.894483 | orchestrator | Monday 10 February 2025 09:28:49 +0000 (0:00:00.352) 0:11:42.861 ******* 2025-02-10 09:31:30.894488 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.894493 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.894498 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.894502 | orchestrator | 2025-02-10 09:31:30.894507 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-10 09:31:30.894512 | orchestrator | Monday 10 February 2025 09:28:49 +0000 (0:00:00.369) 0:11:43.231 ******* 2025-02-10 09:31:30.894517 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.894525 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.894530 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.894534 | orchestrator | 2025-02-10 09:31:30.894539 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-10 09:31:30.894544 | orchestrator | Monday 10 February 2025 09:28:50 +0000 (0:00:00.431) 0:11:43.663 ******* 2025-02-10 09:31:30.894549 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.894554 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.894558 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.894563 | orchestrator | 2025-02-10 09:31:30.894568 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-10 09:31:30.894573 | orchestrator | Monday 10 February 2025 09:28:51 +0000 (0:00:00.689) 0:11:44.352 ******* 2025-02-10 09:31:30.894578 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894582 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894587 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894592 | orchestrator | 2025-02-10 09:31:30.894597 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-10 09:31:30.894601 | orchestrator | Monday 10 February 2025 09:28:51 +0000 (0:00:00.392) 0:11:44.745 ******* 2025-02-10 09:31:30.894606 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894611 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894616 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894620 | orchestrator | 2025-02-10 09:31:30.894625 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-10 09:31:30.894630 | orchestrator | Monday 10 February 2025 09:28:51 +0000 (0:00:00.336) 0:11:45.081 ******* 2025-02-10 09:31:30.894635 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894639 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894644 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894649 | orchestrator | 2025-02-10 09:31:30.894654 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-10 09:31:30.894659 | orchestrator | Monday 10 February 2025 09:28:52 +0000 (0:00:00.364) 0:11:45.446 ******* 2025-02-10 09:31:30.894663 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.894668 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.894673 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.894678 | orchestrator | 2025-02-10 09:31:30.894682 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-10 09:31:30.894687 | orchestrator | Monday 10 February 2025 09:28:52 +0000 (0:00:00.693) 0:11:46.140 ******* 2025-02-10 09:31:30.894692 
| orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894697 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894702 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894706 | orchestrator | 2025-02-10 09:31:30.894711 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-10 09:31:30.894716 | orchestrator | Monday 10 February 2025 09:28:53 +0000 (0:00:00.359) 0:11:46.499 ******* 2025-02-10 09:31:30.894721 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894725 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894733 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894738 | orchestrator | 2025-02-10 09:31:30.894743 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-10 09:31:30.894748 | orchestrator | Monday 10 February 2025 09:28:53 +0000 (0:00:00.355) 0:11:46.854 ******* 2025-02-10 09:31:30.894753 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894758 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894762 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894767 | orchestrator | 2025-02-10 09:31:30.894772 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-10 09:31:30.894777 | orchestrator | Monday 10 February 2025 09:28:53 +0000 (0:00:00.385) 0:11:47.240 ******* 2025-02-10 09:31:30.894781 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894786 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894798 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894803 | orchestrator | 2025-02-10 09:31:30.894808 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-10 09:31:30.894812 | orchestrator | Monday 10 February 2025 09:28:54 +0000 (0:00:00.660) 0:11:47.900 ******* 2025-02-10 09:31:30.894817 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894822 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894827 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894832 | orchestrator | 2025-02-10 09:31:30.894836 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-10 09:31:30.894841 | orchestrator | Monday 10 February 2025 09:28:54 +0000 (0:00:00.400) 0:11:48.301 ******* 2025-02-10 09:31:30.894846 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894851 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894855 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894860 | orchestrator | 2025-02-10 09:31:30.894865 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-10 09:31:30.894870 | orchestrator | Monday 10 February 2025 09:28:55 +0000 (0:00:00.366) 0:11:48.667 ******* 2025-02-10 09:31:30.894875 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894879 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894884 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894889 | orchestrator | 2025-02-10 09:31:30.894906 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-10 09:31:30.894912 | orchestrator | Monday 10 February 2025 09:28:55 +0000 (0:00:00.380) 0:11:49.048 ******* 2025-02-10 09:31:30.894917 | orchestrator | skipping: 
[testbed-node-3] 2025-02-10 09:31:30.894922 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894930 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894934 | orchestrator | 2025-02-10 09:31:30.894939 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-10 09:31:30.894944 | orchestrator | Monday 10 February 2025 09:28:56 +0000 (0:00:00.743) 0:11:49.792 ******* 2025-02-10 09:31:30.894954 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894959 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894964 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894969 | orchestrator | 2025-02-10 09:31:30.894974 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-10 09:31:30.894978 | orchestrator | Monday 10 February 2025 09:28:56 +0000 (0:00:00.386) 0:11:50.178 ******* 2025-02-10 09:31:30.894983 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.894988 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.894993 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.894998 | orchestrator | 2025-02-10 09:31:30.895002 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-10 09:31:30.895007 | orchestrator | Monday 10 February 2025 09:28:57 +0000 (0:00:00.375) 0:11:50.554 ******* 2025-02-10 09:31:30.895012 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895017 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895022 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895026 | orchestrator | 2025-02-10 09:31:30.895031 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-10 09:31:30.895039 | orchestrator | Monday 10 February 2025 09:28:57 +0000 (0:00:00.388) 0:11:50.942 ******* 2025-02-10 09:31:30.895044 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895049 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895053 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895058 | orchestrator | 2025-02-10 09:31:30.895063 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-10 09:31:30.895068 | orchestrator | Monday 10 February 2025 09:28:57 +0000 (0:00:00.347) 0:11:51.290 ******* 2025-02-10 09:31:30.895073 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:31:30.895082 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:31:30.895087 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895091 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:31:30.895096 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:31:30.895101 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895106 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:31:30.895111 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:31:30.895116 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895120 | orchestrator | 2025-02-10 09:31:30.895125 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-10 09:31:30.895130 | orchestrator | Monday 10 February 2025 09:28:58 +0000 (0:00:00.758) 0:11:52.049 ******* 2025-02-10 09:31:30.895135 | 
orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-02-10 09:31:30.895139 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-02-10 09:31:30.895144 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895149 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-02-10 09:31:30.895154 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-10 09:31:30.895158 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895163 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-10 09:31:30.895168 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-10 09:31:30.895175 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895180 | orchestrator | 2025-02-10 09:31:30.895185 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-10 09:31:30.895190 | orchestrator | Monday 10 February 2025 09:28:59 +0000 (0:00:00.457) 0:11:52.506 ******* 2025-02-10 09:31:30.895195 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895200 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895205 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895209 | orchestrator | 2025-02-10 09:31:30.895214 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-10 09:31:30.895219 | orchestrator | Monday 10 February 2025 09:28:59 +0000 (0:00:00.469) 0:11:52.976 ******* 2025-02-10 09:31:30.895224 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895228 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895233 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895238 | orchestrator | 2025-02-10 09:31:30.895243 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:31:30.895248 | orchestrator | Monday 10 February 2025 09:29:00 +0000 (0:00:00.372) 0:11:53.348 ******* 2025-02-10 09:31:30.895252 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895257 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895262 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895267 | orchestrator | 2025-02-10 09:31:30.895272 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:31:30.895277 | orchestrator | Monday 10 February 2025 09:29:00 +0000 (0:00:00.839) 0:11:54.188 ******* 2025-02-10 09:31:30.895281 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895286 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895291 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895296 | orchestrator | 2025-02-10 09:31:30.895300 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:31:30.895305 | orchestrator | Monday 10 February 2025 09:29:01 +0000 (0:00:00.447) 0:11:54.635 ******* 2025-02-10 09:31:30.895310 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895315 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895319 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895324 | orchestrator | 2025-02-10 09:31:30.895329 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:31:30.895338 | orchestrator | Monday 
10 February 2025 09:29:01 +0000 (0:00:00.427) 0:11:55.063 ******* 2025-02-10 09:31:30.895342 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895347 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895352 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895357 | orchestrator | 2025-02-10 09:31:30.895361 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:31:30.895366 | orchestrator | Monday 10 February 2025 09:29:02 +0000 (0:00:00.397) 0:11:55.461 ******* 2025-02-10 09:31:30.895371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.895376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.895381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.895386 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895390 | orchestrator | 2025-02-10 09:31:30.895396 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:31:30.895404 | orchestrator | Monday 10 February 2025 09:29:03 +0000 (0:00:00.888) 0:11:56.349 ******* 2025-02-10 09:31:30.895412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.895419 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.895428 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.895435 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895442 | orchestrator | 2025-02-10 09:31:30.895450 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-10 09:31:30.895457 | orchestrator | Monday 10 February 2025 09:29:03 +0000 (0:00:00.537) 0:11:56.886 ******* 2025-02-10 09:31:30.895465 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.895472 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.895480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.895487 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895495 | orchestrator | 2025-02-10 09:31:30.895503 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:31:30.895510 | orchestrator | Monday 10 February 2025 09:29:04 +0000 (0:00:00.593) 0:11:57.480 ******* 2025-02-10 09:31:30.895518 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895526 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895535 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895541 | orchestrator | 2025-02-10 09:31:30.895545 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:31:30.895550 | orchestrator | Monday 10 February 2025 09:29:04 +0000 (0:00:00.429) 0:11:57.910 ******* 2025-02-10 09:31:30.895555 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:31:30.895560 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895565 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:31:30.895570 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895575 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:31:30.895579 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895584 | orchestrator | 2025-02-10 09:31:30.895589 | 
orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:31:30.895594 | orchestrator | Monday 10 February 2025 09:29:05 +0000 (0:00:00.587) 0:11:58.498 ******* 2025-02-10 09:31:30.895599 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895603 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895608 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895613 | orchestrator | 2025-02-10 09:31:30.895618 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:31:30.895622 | orchestrator | Monday 10 February 2025 09:29:05 +0000 (0:00:00.832) 0:11:59.330 ******* 2025-02-10 09:31:30.895631 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895636 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895648 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895653 | orchestrator | 2025-02-10 09:31:30.895658 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:31:30.895663 | orchestrator | Monday 10 February 2025 09:29:06 +0000 (0:00:00.422) 0:11:59.753 ******* 2025-02-10 09:31:30.895668 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:31:30.895675 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895680 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:31:30.895685 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895689 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:31:30.895694 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895699 | orchestrator | 2025-02-10 09:31:30.895704 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:31:30.895709 | orchestrator | Monday 10 February 2025 09:29:07 +0000 (0:00:00.678) 0:12:00.432 ******* 2025-02-10 09:31:30.895713 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.895718 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895723 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.895728 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895733 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.895738 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895743 | orchestrator | 2025-02-10 09:31:30.895747 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:31:30.895752 | orchestrator | Monday 10 February 2025 09:29:07 +0000 (0:00:00.429) 0:12:00.862 ******* 2025-02-10 09:31:30.895757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.895762 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.895766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.895771 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-10 09:31:30.895776 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-10 09:31:30.895781 | orchestrator | skipping: [testbed-node-4] => 
(item=testbed-node-5)  2025-02-10 09:31:30.895786 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895791 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895795 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-10 09:31:30.895800 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-10 09:31:30.895807 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-10 09:31:30.895812 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895817 | orchestrator | 2025-02-10 09:31:30.895822 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-10 09:31:30.895827 | orchestrator | Monday 10 February 2025 09:29:08 +0000 (0:00:01.109) 0:12:01.971 ******* 2025-02-10 09:31:30.895831 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895836 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895841 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895846 | orchestrator | 2025-02-10 09:31:30.895851 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-10 09:31:30.895855 | orchestrator | Monday 10 February 2025 09:29:09 +0000 (0:00:00.665) 0:12:02.636 ******* 2025-02-10 09:31:30.895860 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:31:30.895865 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895870 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-10 09:31:30.895875 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895879 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-10 09:31:30.895888 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895893 | orchestrator | 2025-02-10 09:31:30.895911 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-10 09:31:30.895916 | orchestrator | Monday 10 February 2025 09:29:10 +0000 (0:00:01.038) 0:12:03.675 ******* 2025-02-10 09:31:30.895921 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895925 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895930 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895935 | orchestrator | 2025-02-10 09:31:30.895940 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-10 09:31:30.895945 | orchestrator | Monday 10 February 2025 09:29:10 +0000 (0:00:00.654) 0:12:04.330 ******* 2025-02-10 09:31:30.895949 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.895954 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895959 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895964 | orchestrator | 2025-02-10 09:31:30.895969 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-02-10 09:31:30.895973 | orchestrator | Monday 10 February 2025 09:29:11 +0000 (0:00:00.929) 0:12:05.260 ******* 2025-02-10 09:31:30.895978 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.895983 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.895988 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-02-10 09:31:30.895993 | orchestrator | 2025-02-10 09:31:30.895997 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-02-10 09:31:30.896002 | orchestrator | 
Monday 10 February 2025 09:29:12 +0000 (0:00:00.518) 0:12:05.778 ******* 2025-02-10 09:31:30.896007 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:31:30.896012 | orchestrator | 2025-02-10 09:31:30.896017 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-02-10 09:31:30.896022 | orchestrator | Monday 10 February 2025 09:29:14 +0000 (0:00:01.929) 0:12:07.708 ******* 2025-02-10 09:31:30.896031 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-02-10 09:31:30.896038 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.896043 | orchestrator | 2025-02-10 09:31:30.896048 | orchestrator | TASK [ceph-mds : create filesystem pools] ************************************** 2025-02-10 09:31:30.896052 | orchestrator | Monday 10 February 2025 09:29:15 +0000 (0:00:00.649) 0:12:08.357 ******* 2025-02-10 09:31:30.896059 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-10 09:31:30.896065 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-10 09:31:30.896070 | orchestrator | 2025-02-10 09:31:30.896075 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-02-10 09:31:30.896080 | orchestrator | Monday 10 February 2025 09:29:22 +0000 (0:00:07.167) 0:12:15.525 ******* 2025-02-10 09:31:30.896085 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:31:30.896089 | orchestrator | 2025-02-10 09:31:30.896094 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-02-10 09:31:30.896099 | orchestrator | Monday 10 February 2025 09:29:25 +0000 (0:00:03.239) 0:12:18.764 ******* 2025-02-10 09:31:30.896104 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.896109 | orchestrator | 2025-02-10 09:31:30.896114 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-02-10 09:31:30.896121 | orchestrator | Monday 10 February 2025 09:29:26 +0000 (0:00:00.619) 0:12:19.384 ******* 2025-02-10 09:31:30.896126 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-02-10 09:31:30.896131 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-02-10 09:31:30.896136 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-02-10 09:31:30.896141 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-02-10 09:31:30.896146 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-02-10 09:31:30.896151 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-02-10 09:31:30.896155 | orchestrator | 2025-02-10 
09:31:30.896160 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-02-10 09:31:30.896165 | orchestrator | Monday 10 February 2025 09:29:27 +0000 (0:00:01.389) 0:12:20.773 ******* 2025-02-10 09:31:30.896170 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:31:30.896178 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:31:30.896186 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-02-10 09:31:30.896191 | orchestrator | 2025-02-10 09:31:30.896196 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-02-10 09:31:30.896201 | orchestrator | Monday 10 February 2025 09:29:29 +0000 (0:00:01.979) 0:12:22.752 ******* 2025-02-10 09:31:30.896205 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-10 09:31:30.896210 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:31:30.896215 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.896220 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-10 09:31:30.896225 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-10 09:31:30.896230 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.896234 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-10 09:31:30.896239 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-10 09:31:30.896244 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.896249 | orchestrator | 2025-02-10 09:31:30.896254 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-02-10 09:31:30.896258 | orchestrator | Monday 10 February 2025 09:29:30 +0000 (0:00:01.236) 0:12:23.989 ******* 2025-02-10 09:31:30.896263 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.896269 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.896276 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.896283 | orchestrator | 2025-02-10 09:31:30.896291 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-02-10 09:31:30.896295 | orchestrator | Monday 10 February 2025 09:29:31 +0000 (0:00:00.395) 0:12:24.384 ******* 2025-02-10 09:31:30.896300 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.896305 | orchestrator | 2025-02-10 09:31:30.896310 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-02-10 09:31:30.896315 | orchestrator | Monday 10 February 2025 09:29:31 +0000 (0:00:00.882) 0:12:25.267 ******* 2025-02-10 09:31:30.896320 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.896324 | orchestrator | 2025-02-10 09:31:30.896329 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-02-10 09:31:30.896334 | orchestrator | Monday 10 February 2025 09:29:32 +0000 (0:00:00.594) 0:12:25.861 ******* 2025-02-10 09:31:30.896342 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.896347 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.896352 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.896357 | orchestrator | 2025-02-10 09:31:30.896362 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target 
file] ************************ 2025-02-10 09:31:30.896404 | orchestrator | Monday 10 February 2025 09:29:34 +0000 (0:00:01.592) 0:12:27.453 ******* 2025-02-10 09:31:30.896409 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.896414 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.896418 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.896423 | orchestrator | 2025-02-10 09:31:30.896428 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-02-10 09:31:30.896433 | orchestrator | Monday 10 February 2025 09:29:35 +0000 (0:00:01.211) 0:12:28.665 ******* 2025-02-10 09:31:30.896437 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.896442 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.896447 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.896452 | orchestrator | 2025-02-10 09:31:30.896457 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-02-10 09:31:30.896461 | orchestrator | Monday 10 February 2025 09:29:37 +0000 (0:00:01.819) 0:12:30.484 ******* 2025-02-10 09:31:30.896466 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.896471 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.896476 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.896481 | orchestrator | 2025-02-10 09:31:30.896486 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-02-10 09:31:30.896490 | orchestrator | Monday 10 February 2025 09:29:39 +0000 (0:00:02.367) 0:12:32.852 ******* 2025-02-10 09:31:30.896495 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-02-10 09:31:30.896500 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-02-10 09:31:30.896505 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 
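Editor's note: the "FAILED - RETRYING" messages above are expected at this point — the task simply polls for the MDS admin socket until the freshly started container answers. A minimal sketch of that retry pattern, assuming the conventional socket path /var/run/ceph/<cluster>-mds.<hostname>.asok and a 15-second delay (the exact path, delay and check used by ceph-ansible may differ):

- name: Wait for mds socket to exist        # polls until the MDS admin socket appears
  ansible.builtin.stat:
    path: "/var/run/ceph/{{ cluster | default('ceph') }}-mds.{{ ansible_facts['hostname'] }}.asok"
  register: mds_socket
  retries: 5
  delay: 15
  until: mds_socket.stat.exists

In the run below the socket shows up after a single retry on each MDS node, so the task still finishes "ok" after roughly 17 seconds.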
2025-02-10 09:31:30.896510 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.896515 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.896520 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.896525 | orchestrator | 2025-02-10 09:31:30.896530 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-10 09:31:30.896534 | orchestrator | Monday 10 February 2025 09:29:56 +0000 (0:00:17.323) 0:12:50.176 ******* 2025-02-10 09:31:30.896539 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.896544 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.896549 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.896553 | orchestrator | 2025-02-10 09:31:30.896558 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-02-10 09:31:30.896564 | orchestrator | Monday 10 February 2025 09:29:57 +0000 (0:00:00.772) 0:12:50.949 ******* 2025-02-10 09:31:30.896572 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.896580 | orchestrator | 2025-02-10 09:31:30.896587 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-02-10 09:31:30.896595 | orchestrator | Monday 10 February 2025 09:29:58 +0000 (0:00:00.925) 0:12:51.875 ******* 2025-02-10 09:31:30.896603 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.896611 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.896619 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.896631 | orchestrator | 2025-02-10 09:31:30.896639 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-02-10 09:31:30.896649 | orchestrator | Monday 10 February 2025 09:29:58 +0000 (0:00:00.384) 0:12:52.260 ******* 2025-02-10 09:31:30.896654 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.896661 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.896669 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.896676 | orchestrator | 2025-02-10 09:31:30.896684 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-02-10 09:31:30.896692 | orchestrator | Monday 10 February 2025 09:30:00 +0000 (0:00:01.268) 0:12:53.528 ******* 2025-02-10 09:31:30.896699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.896719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.896726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.896731 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.896736 | orchestrator | 2025-02-10 09:31:30.896743 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-02-10 09:31:30.896751 | orchestrator | Monday 10 February 2025 09:30:01 +0000 (0:00:01.116) 0:12:54.644 ******* 2025-02-10 09:31:30.896759 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.896767 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.896775 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.896783 | orchestrator | 2025-02-10 09:31:30.896791 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-10 09:31:30.896799 | orchestrator | Monday 10 February 2025 09:30:01 +0000 (0:00:00.699) 0:12:55.344 ******* 2025-02-10 09:31:30.896807 | 
orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.896814 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.896822 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.896830 | orchestrator | 2025-02-10 09:31:30.896838 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-02-10 09:31:30.896847 | orchestrator | 2025-02-10 09:31:30.896852 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-10 09:31:30.896857 | orchestrator | Monday 10 February 2025 09:30:04 +0000 (0:00:02.458) 0:12:57.802 ******* 2025-02-10 09:31:30.896862 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.896867 | orchestrator | 2025-02-10 09:31:30.896871 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-10 09:31:30.896876 | orchestrator | Monday 10 February 2025 09:30:05 +0000 (0:00:00.985) 0:12:58.787 ******* 2025-02-10 09:31:30.896881 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.896890 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.896928 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.896934 | orchestrator | 2025-02-10 09:31:30.896939 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-10 09:31:30.896943 | orchestrator | Monday 10 February 2025 09:30:05 +0000 (0:00:00.387) 0:12:59.174 ******* 2025-02-10 09:31:30.896948 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.896953 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.896958 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.896963 | orchestrator | 2025-02-10 09:31:30.896968 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-10 09:31:30.896972 | orchestrator | Monday 10 February 2025 09:30:06 +0000 (0:00:00.826) 0:13:00.001 ******* 2025-02-10 09:31:30.896977 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.896982 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.896987 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.896992 | orchestrator | 2025-02-10 09:31:30.896997 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-10 09:31:30.897006 | orchestrator | Monday 10 February 2025 09:30:07 +0000 (0:00:00.831) 0:13:00.833 ******* 2025-02-10 09:31:30.897012 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.897020 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.897027 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.897035 | orchestrator | 2025-02-10 09:31:30.897043 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-10 09:31:30.897048 | orchestrator | Monday 10 February 2025 09:30:08 +0000 (0:00:01.305) 0:13:02.138 ******* 2025-02-10 09:31:30.897053 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897058 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897063 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897067 | orchestrator | 2025-02-10 09:31:30.897072 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-10 09:31:30.897077 | orchestrator | Monday 10 February 2025 09:30:09 +0000 (0:00:00.395) 0:13:02.534 ******* 2025-02-10 
09:31:30.897086 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897091 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897096 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897101 | orchestrator | 2025-02-10 09:31:30.897105 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-10 09:31:30.897110 | orchestrator | Monday 10 February 2025 09:30:09 +0000 (0:00:00.327) 0:13:02.862 ******* 2025-02-10 09:31:30.897115 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897119 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897124 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897129 | orchestrator | 2025-02-10 09:31:30.897134 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-10 09:31:30.897138 | orchestrator | Monday 10 February 2025 09:30:09 +0000 (0:00:00.366) 0:13:03.228 ******* 2025-02-10 09:31:30.897143 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897148 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897153 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897158 | orchestrator | 2025-02-10 09:31:30.897162 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-10 09:31:30.897167 | orchestrator | Monday 10 February 2025 09:30:10 +0000 (0:00:00.725) 0:13:03.954 ******* 2025-02-10 09:31:30.897172 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897177 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897181 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897186 | orchestrator | 2025-02-10 09:31:30.897191 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-10 09:31:30.897196 | orchestrator | Monday 10 February 2025 09:30:11 +0000 (0:00:00.395) 0:13:04.350 ******* 2025-02-10 09:31:30.897201 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897205 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897210 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897215 | orchestrator | 2025-02-10 09:31:30.897220 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-10 09:31:30.897224 | orchestrator | Monday 10 February 2025 09:30:11 +0000 (0:00:00.355) 0:13:04.705 ******* 2025-02-10 09:31:30.897229 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.897234 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.897239 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.897244 | orchestrator | 2025-02-10 09:31:30.897248 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-10 09:31:30.897253 | orchestrator | Monday 10 February 2025 09:30:12 +0000 (0:00:00.749) 0:13:05.455 ******* 2025-02-10 09:31:30.897258 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897263 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897267 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897272 | orchestrator | 2025-02-10 09:31:30.897277 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-10 09:31:30.897289 | orchestrator | Monday 10 February 2025 09:30:12 +0000 (0:00:00.730) 0:13:06.186 ******* 2025-02-10 09:31:30.897297 | orchestrator | skipping: [testbed-node-3] 2025-02-10 
09:31:30.897304 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897312 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897319 | orchestrator | 2025-02-10 09:31:30.897327 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-10 09:31:30.897334 | orchestrator | Monday 10 February 2025 09:30:13 +0000 (0:00:00.510) 0:13:06.696 ******* 2025-02-10 09:31:30.897341 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.897352 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.897359 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.897366 | orchestrator | 2025-02-10 09:31:30.897373 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-10 09:31:30.897381 | orchestrator | Monday 10 February 2025 09:30:13 +0000 (0:00:00.450) 0:13:07.146 ******* 2025-02-10 09:31:30.897389 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.897402 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.897410 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.897418 | orchestrator | 2025-02-10 09:31:30.897425 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-10 09:31:30.897430 | orchestrator | Monday 10 February 2025 09:30:14 +0000 (0:00:00.509) 0:13:07.656 ******* 2025-02-10 09:31:30.897435 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.897440 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.897445 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.897450 | orchestrator | 2025-02-10 09:31:30.897458 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-10 09:31:30.897463 | orchestrator | Monday 10 February 2025 09:30:15 +0000 (0:00:00.765) 0:13:08.422 ******* 2025-02-10 09:31:30.897468 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897473 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897478 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897483 | orchestrator | 2025-02-10 09:31:30.897487 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-10 09:31:30.897492 | orchestrator | Monday 10 February 2025 09:30:15 +0000 (0:00:00.385) 0:13:08.807 ******* 2025-02-10 09:31:30.897497 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897502 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897506 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897511 | orchestrator | 2025-02-10 09:31:30.897516 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-10 09:31:30.897524 | orchestrator | Monday 10 February 2025 09:30:15 +0000 (0:00:00.337) 0:13:09.145 ******* 2025-02-10 09:31:30.897531 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897539 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897546 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897554 | orchestrator | 2025-02-10 09:31:30.897562 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-10 09:31:30.897570 | orchestrator | Monday 10 February 2025 09:30:16 +0000 (0:00:00.335) 0:13:09.481 ******* 2025-02-10 09:31:30.897578 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.897583 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.897588 | orchestrator | ok: [testbed-node-5] 
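Editor's note: the block of "check for a ... container" tasks and the handler_*_status facts in this play exist so that the handlers later on only restart daemons that actually run on a node. A rough sketch of that pattern, assuming docker as the container runtime and a ceph-<daemon>-<hostname> container naming scheme (ceph-ansible abstracts both behind variables, so treat names and the command as illustrative only):

- name: Check for a rgw container            # stdout is non-empty only if a matching container is running
  ansible.builtin.command: >
    docker ps -q --filter name=ceph-rgw-{{ ansible_facts['hostname'] }}
  register: rgw_container_stat
  changed_when: false
  failed_when: false

- name: set_fact handler_rgw_status          # later consumed by the rgw restart handler
  ansible.builtin.set_fact:
    handler_rgw_status: "{{ rgw_container_stat.stdout | length > 0 }}"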
2025-02-10 09:31:30.897593 | orchestrator | 2025-02-10 09:31:30.897598 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-10 09:31:30.897603 | orchestrator | Monday 10 February 2025 09:30:16 +0000 (0:00:00.862) 0:13:10.343 ******* 2025-02-10 09:31:30.897608 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897612 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897617 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897622 | orchestrator | 2025-02-10 09:31:30.897627 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-10 09:31:30.897632 | orchestrator | Monday 10 February 2025 09:30:17 +0000 (0:00:00.408) 0:13:10.752 ******* 2025-02-10 09:31:30.897637 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897641 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897646 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897653 | orchestrator | 2025-02-10 09:31:30.897661 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-10 09:31:30.897669 | orchestrator | Monday 10 February 2025 09:30:17 +0000 (0:00:00.388) 0:13:11.141 ******* 2025-02-10 09:31:30.897677 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897684 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897693 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897700 | orchestrator | 2025-02-10 09:31:30.897708 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-10 09:31:30.897716 | orchestrator | Monday 10 February 2025 09:30:18 +0000 (0:00:00.387) 0:13:11.528 ******* 2025-02-10 09:31:30.897724 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897732 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897746 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897754 | orchestrator | 2025-02-10 09:31:30.897759 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-10 09:31:30.897764 | orchestrator | Monday 10 February 2025 09:30:18 +0000 (0:00:00.718) 0:13:12.247 ******* 2025-02-10 09:31:30.897769 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897774 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897778 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897783 | orchestrator | 2025-02-10 09:31:30.897788 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-10 09:31:30.897793 | orchestrator | Monday 10 February 2025 09:30:19 +0000 (0:00:00.441) 0:13:12.688 ******* 2025-02-10 09:31:30.897798 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897806 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897813 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897821 | orchestrator | 2025-02-10 09:31:30.897829 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-10 09:31:30.897837 | orchestrator | Monday 10 February 2025 09:30:19 +0000 (0:00:00.362) 0:13:13.051 ******* 2025-02-10 09:31:30.897845 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897853 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897861 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897869 | orchestrator | 2025-02-10 
09:31:30.897877 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-10 09:31:30.897886 | orchestrator | Monday 10 February 2025 09:30:20 +0000 (0:00:00.394) 0:13:13.445 ******* 2025-02-10 09:31:30.897910 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897918 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897926 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897934 | orchestrator | 2025-02-10 09:31:30.897942 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-10 09:31:30.897950 | orchestrator | Monday 10 February 2025 09:30:20 +0000 (0:00:00.733) 0:13:14.179 ******* 2025-02-10 09:31:30.897958 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.897966 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.897974 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.897981 | orchestrator | 2025-02-10 09:31:30.897988 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-10 09:31:30.897997 | orchestrator | Monday 10 February 2025 09:30:21 +0000 (0:00:00.362) 0:13:14.541 ******* 2025-02-10 09:31:30.898005 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898034 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898043 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.898051 | orchestrator | 2025-02-10 09:31:30.898059 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-10 09:31:30.898077 | orchestrator | Monday 10 February 2025 09:30:21 +0000 (0:00:00.460) 0:13:15.002 ******* 2025-02-10 09:31:30.898086 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898094 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898098 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.898103 | orchestrator | 2025-02-10 09:31:30.898108 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-10 09:31:30.898113 | orchestrator | Monday 10 February 2025 09:30:22 +0000 (0:00:00.457) 0:13:15.459 ******* 2025-02-10 09:31:30.898117 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898122 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898127 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.898132 | orchestrator | 2025-02-10 09:31:30.898137 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-10 09:31:30.898142 | orchestrator | Monday 10 February 2025 09:30:22 +0000 (0:00:00.663) 0:13:16.123 ******* 2025-02-10 09:31:30.898146 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:31:30.898156 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-10 09:31:30.898161 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898166 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:31:30.898171 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-10 09:31:30.898176 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898181 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:31:30.898186 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-10 09:31:30.898190 | orchestrator | skipping: [testbed-node-5] 2025-02-10 
09:31:30.898195 | orchestrator | 2025-02-10 09:31:30.898200 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-10 09:31:30.898205 | orchestrator | Monday 10 February 2025 09:30:23 +0000 (0:00:00.432) 0:13:16.556 ******* 2025-02-10 09:31:30.898211 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-02-10 09:31:30.898218 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-02-10 09:31:30.898226 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898233 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-02-10 09:31:30.898241 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-10 09:31:30.898249 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898255 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-10 09:31:30.898259 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-10 09:31:30.898264 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.898269 | orchestrator | 2025-02-10 09:31:30.898274 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-10 09:31:30.898278 | orchestrator | Monday 10 February 2025 09:30:23 +0000 (0:00:00.416) 0:13:16.972 ******* 2025-02-10 09:31:30.898283 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898288 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898293 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.898298 | orchestrator | 2025-02-10 09:31:30.898302 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-10 09:31:30.898307 | orchestrator | Monday 10 February 2025 09:30:24 +0000 (0:00:00.408) 0:13:17.380 ******* 2025-02-10 09:31:30.898312 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898317 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898322 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.898326 | orchestrator | 2025-02-10 09:31:30.898331 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:31:30.898336 | orchestrator | Monday 10 February 2025 09:30:24 +0000 (0:00:00.723) 0:13:18.103 ******* 2025-02-10 09:31:30.898341 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898346 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898351 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.898355 | orchestrator | 2025-02-10 09:31:30.898360 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:31:30.898365 | orchestrator | Monday 10 February 2025 09:30:25 +0000 (0:00:00.368) 0:13:18.472 ******* 2025-02-10 09:31:30.898370 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898375 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898379 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.898384 | orchestrator | 2025-02-10 09:31:30.898389 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:31:30.898394 | orchestrator | Monday 10 February 2025 09:30:25 +0000 (0:00:00.417) 0:13:18.890 ******* 2025-02-10 09:31:30.898398 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898403 | orchestrator | skipping: 
[testbed-node-4] 2025-02-10 09:31:30.898408 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.898413 | orchestrator | 2025-02-10 09:31:30.898418 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:31:30.898426 | orchestrator | Monday 10 February 2025 09:30:25 +0000 (0:00:00.394) 0:13:19.284 ******* 2025-02-10 09:31:30.898431 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898436 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898441 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.898446 | orchestrator | 2025-02-10 09:31:30.898450 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:31:30.898455 | orchestrator | Monday 10 February 2025 09:30:26 +0000 (0:00:00.745) 0:13:20.030 ******* 2025-02-10 09:31:30.898460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.898465 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.898470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.898475 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898479 | orchestrator | 2025-02-10 09:31:30.898484 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:31:30.898489 | orchestrator | Monday 10 February 2025 09:30:27 +0000 (0:00:00.558) 0:13:20.588 ******* 2025-02-10 09:31:30.898494 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.898499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.898506 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.898511 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898516 | orchestrator | 2025-02-10 09:31:30.898521 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-10 09:31:30.898526 | orchestrator | Monday 10 February 2025 09:30:27 +0000 (0:00:00.524) 0:13:21.112 ******* 2025-02-10 09:31:30.898530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.898535 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.898540 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.898545 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898549 | orchestrator | 2025-02-10 09:31:30.898554 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:31:30.898559 | orchestrator | Monday 10 February 2025 09:30:28 +0000 (0:00:00.522) 0:13:21.635 ******* 2025-02-10 09:31:30.898564 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898568 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898573 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.898578 | orchestrator | 2025-02-10 09:31:30.898583 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:31:30.898588 | orchestrator | Monday 10 February 2025 09:30:28 +0000 (0:00:00.361) 0:13:21.997 ******* 2025-02-10 09:31:30.898592 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:31:30.898597 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898602 | orchestrator | skipping: [testbed-node-4] => (item=0)  
2025-02-10 09:31:30.898607 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898611 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:31:30.898616 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.898621 | orchestrator | 2025-02-10 09:31:30.898626 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:31:30.898631 | orchestrator | Monday 10 February 2025 09:30:30 +0000 (0:00:01.376) 0:13:23.373 ******* 2025-02-10 09:31:30.898635 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898640 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898645 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.898650 | orchestrator | 2025-02-10 09:31:30.898654 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:31:30.898659 | orchestrator | Monday 10 February 2025 09:30:30 +0000 (0:00:00.454) 0:13:23.827 ******* 2025-02-10 09:31:30.898664 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898669 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898677 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.898682 | orchestrator | 2025-02-10 09:31:30.898690 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:31:30.898695 | orchestrator | Monday 10 February 2025 09:30:30 +0000 (0:00:00.376) 0:13:24.203 ******* 2025-02-10 09:31:30.898699 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:31:30.898704 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898709 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:31:30.898714 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898719 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:31:30.898723 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.898728 | orchestrator | 2025-02-10 09:31:30.898733 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:31:30.898739 | orchestrator | Monday 10 February 2025 09:30:31 +0000 (0:00:00.524) 0:13:24.728 ******* 2025-02-10 09:31:30.898747 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.898755 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898763 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.898772 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898780 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-10 09:31:30.898788 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.898793 | orchestrator | 2025-02-10 09:31:30.898798 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:31:30.898803 | orchestrator | Monday 10 February 2025 09:30:32 +0000 (0:00:00.731) 0:13:25.460 ******* 2025-02-10 09:31:30.898808 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.898813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.898817 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2025-02-10 09:31:30.898822 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898830 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-10 09:31:30.898836 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-10 09:31:30.898844 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-10 09:31:30.898852 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898860 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-10 09:31:30.898868 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-10 09:31:30.898876 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-10 09:31:30.898884 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.898893 | orchestrator | 2025-02-10 09:31:30.898914 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-10 09:31:30.898919 | orchestrator | Monday 10 February 2025 09:30:32 +0000 (0:00:00.722) 0:13:26.183 ******* 2025-02-10 09:31:30.898924 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898928 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898933 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.898938 | orchestrator | 2025-02-10 09:31:30.898943 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-10 09:31:30.898951 | orchestrator | Monday 10 February 2025 09:30:33 +0000 (0:00:00.991) 0:13:27.175 ******* 2025-02-10 09:31:30.898956 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:31:30.898961 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.898968 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-10 09:31:30.898976 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.898984 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-10 09:31:30.898998 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.899006 | orchestrator | 2025-02-10 09:31:30.899014 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-10 09:31:30.899019 | orchestrator | Monday 10 February 2025 09:30:34 +0000 (0:00:00.838) 0:13:28.013 ******* 2025-02-10 09:31:30.899024 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.899029 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.899036 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.899044 | orchestrator | 2025-02-10 09:31:30.899052 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-10 09:31:30.899059 | orchestrator | Monday 10 February 2025 09:30:35 +0000 (0:00:00.932) 0:13:28.945 ******* 2025-02-10 09:31:30.899066 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.899074 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.899081 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.899089 | orchestrator | 2025-02-10 09:31:30.899096 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-02-10 09:31:30.899104 | orchestrator | Monday 10 February 2025 09:30:36 +0000 (0:00:00.640) 0:13:29.586 ******* 2025-02-10 09:31:30.899111 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.899118 | orchestrator | 
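Editor's note: the ceph-rgw tasks that follow create the RGW data pools before the radosgw containers are started. From the loop items printed further down (pg_num 8, size 3, replicated for the default.rgw.* pools), the pool specification driving that loop looks roughly like the sketch below; the variable name rgw_create_pools and the plain ceph CLI call are assumptions, and the containerized role typically wraps the command in the ceph-daemon image rather than calling ceph directly.

---
# Assumed group_vars shape, reconstructed from the loop items in this log
rgw_create_pools:
  default.rgw.buckets.data:  {pg_num: 8, size: 3, type: replicated}
  default.rgw.buckets.index: {pg_num: 8, size: 3, type: replicated}
  default.rgw.control:       {pg_num: 8, size: 3, type: replicated}
  default.rgw.log:           {pg_num: 8, size: 3, type: replicated}
  default.rgw.meta:          {pg_num: 8, size: 3, type: replicated}
---
# Task sketch for "create replicated pools for rgw", delegated to the first
# monitor as in the log output (testbed-node-3 -> testbed-node-0); the pool
# size would be applied afterwards with "ceph osd pool set <pool> size 3"
- name: Create replicated pools for rgw
  ansible.builtin.command: >
    ceph osd pool create {{ item.key }} {{ item.value.pg_num }} replicated
  loop: "{{ rgw_create_pools | dict2items }}"
  delegate_to: "{{ groups.get(mon_group_name)[0] }}"
  run_once: true
  changed_when: true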
2025-02-10 09:31:30.899126 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-02-10 09:31:30.899134 | orchestrator | Monday 10 February 2025 09:30:37 +0000 (0:00:00.896) 0:13:30.482 ******* 2025-02-10 09:31:30.899142 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-02-10 09:31:30.899150 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-02-10 09:31:30.899157 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-02-10 09:31:30.899165 | orchestrator | 2025-02-10 09:31:30.899173 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-02-10 09:31:30.899181 | orchestrator | Monday 10 February 2025 09:30:37 +0000 (0:00:00.742) 0:13:31.224 ******* 2025-02-10 09:31:30.899190 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:31:30.899195 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:31:30.899200 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-02-10 09:31:30.899204 | orchestrator | 2025-02-10 09:31:30.899209 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-02-10 09:31:30.899214 | orchestrator | Monday 10 February 2025 09:30:39 +0000 (0:00:01.886) 0:13:33.111 ******* 2025-02-10 09:31:30.899219 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-10 09:31:30.899224 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-10 09:31:30.899228 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.899233 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-10 09:31:30.899238 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-10 09:31:30.899243 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-10 09:31:30.899248 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-10 09:31:30.899252 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.899257 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.899262 | orchestrator | 2025-02-10 09:31:30.899267 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-02-10 09:31:30.899271 | orchestrator | Monday 10 February 2025 09:30:41 +0000 (0:00:01.246) 0:13:34.358 ******* 2025-02-10 09:31:30.899276 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.899281 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.899286 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.899291 | orchestrator | 2025-02-10 09:31:30.899297 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-02-10 09:31:30.899304 | orchestrator | Monday 10 February 2025 09:30:41 +0000 (0:00:00.696) 0:13:35.054 ******* 2025-02-10 09:31:30.899313 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.899329 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.899337 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.899345 | orchestrator | 2025-02-10 09:31:30.899353 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-02-10 09:31:30.899361 | orchestrator | Monday 10 February 2025 09:30:42 +0000 (0:00:00.354) 0:13:35.409 ******* 2025-02-10 09:31:30.899369 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-02-10 09:31:30.899377 | 
orchestrator | 2025-02-10 09:31:30.899384 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-02-10 09:31:30.899396 | orchestrator | Monday 10 February 2025 09:30:42 +0000 (0:00:00.335) 0:13:35.745 ******* 2025-02-10 09:31:30.899405 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:31:30.899410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:31:30.899415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:31:30.899420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:31:30.899428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:31:30.899433 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.899438 | orchestrator | 2025-02-10 09:31:30.899443 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-02-10 09:31:30.899448 | orchestrator | Monday 10 February 2025 09:30:43 +0000 (0:00:00.789) 0:13:36.534 ******* 2025-02-10 09:31:30.899452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:31:30.899461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:31:30.899466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:31:30.899471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:31:30.899475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:31:30.899480 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.899485 | orchestrator | 2025-02-10 09:31:30.899490 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-02-10 09:31:30.899495 | orchestrator | Monday 10 February 2025 09:30:44 +0000 (0:00:01.237) 0:13:37.772 ******* 2025-02-10 09:31:30.899499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:31:30.899504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:31:30.899509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:31:30.899514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 09:31:30.899518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-10 
09:31:30.899523 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.899528 | orchestrator | 2025-02-10 09:31:30.899533 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-02-10 09:31:30.899541 | orchestrator | Monday 10 February 2025 09:30:45 +0000 (0:00:00.706) 0:13:38.479 ******* 2025-02-10 09:31:30.899546 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-10 09:31:30.899552 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-10 09:31:30.899557 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-10 09:31:30.899562 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-10 09:31:30.899567 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-10 09:31:30.899571 | orchestrator | 2025-02-10 09:31:30.899576 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-02-10 09:31:30.899581 | orchestrator | Monday 10 February 2025 09:31:10 +0000 (0:00:25.350) 0:14:03.830 ******* 2025-02-10 09:31:30.899586 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.899591 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.899595 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.899600 | orchestrator | 2025-02-10 09:31:30.899605 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-02-10 09:31:30.899610 | orchestrator | Monday 10 February 2025 09:31:11 +0000 (0:00:00.522) 0:14:04.352 ******* 2025-02-10 09:31:30.899614 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.899619 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.899624 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.899629 | orchestrator | 2025-02-10 09:31:30.899633 | orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-02-10 09:31:30.899638 | orchestrator | Monday 10 February 2025 09:31:11 +0000 (0:00:00.402) 0:14:04.755 ******* 2025-02-10 09:31:30.899643 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.899648 | orchestrator | 2025-02-10 09:31:30.899653 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-02-10 09:31:30.899657 | orchestrator | Monday 10 February 2025 09:31:12 +0000 (0:00:00.693) 0:14:05.448 ******* 2025-02-10 09:31:30.899662 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.899667 | orchestrator | 2025-02-10 09:31:30.899672 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-02-10 09:31:30.899679 | orchestrator | Monday 10 February 2025 09:31:12 +0000 (0:00:00.895) 0:14:06.344 ******* 2025-02-10 09:31:30.899684 | 
orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.899689 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.899693 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.899698 | orchestrator | 2025-02-10 09:31:30.899703 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-02-10 09:31:30.899708 | orchestrator | Monday 10 February 2025 09:31:14 +0000 (0:00:01.258) 0:14:07.603 ******* 2025-02-10 09:31:30.899712 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.899717 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.899722 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.899727 | orchestrator | 2025-02-10 09:31:30.899731 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-02-10 09:31:30.899736 | orchestrator | Monday 10 February 2025 09:31:15 +0000 (0:00:01.287) 0:14:08.890 ******* 2025-02-10 09:31:30.899741 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.899745 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.899753 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.899758 | orchestrator | 2025-02-10 09:31:30.899763 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-02-10 09:31:30.899768 | orchestrator | Monday 10 February 2025 09:31:17 +0000 (0:00:02.285) 0:14:11.176 ******* 2025-02-10 09:31:30.899773 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-02-10 09:31:30.899778 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-02-10 09:31:30.899783 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-02-10 09:31:30.899788 | orchestrator | 2025-02-10 09:31:30.899792 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-02-10 09:31:30.899797 | orchestrator | Monday 10 February 2025 09:31:20 +0000 (0:00:02.305) 0:14:13.481 ******* 2025-02-10 09:31:30.899802 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.899807 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:31:30.899811 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:31:30.899817 | orchestrator | 2025-02-10 09:31:30.899825 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-10 09:31:30.899832 | orchestrator | Monday 10 February 2025 09:31:21 +0000 (0:00:01.553) 0:14:15.034 ******* 2025-02-10 09:31:30.899840 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.899847 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.899856 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.899864 | orchestrator | 2025-02-10 09:31:30.899872 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-02-10 09:31:30.899877 | orchestrator | Monday 10 February 2025 09:31:22 +0000 (0:00:00.814) 0:14:15.849 ******* 2025-02-10 09:31:30.899882 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:31:30.899887 | orchestrator | 2025-02-10 09:31:30.899891 | orchestrator | RUNNING HANDLER [ceph-handler : set 
_rgw_handler_called before restart] ******** 2025-02-10 09:31:30.899911 | orchestrator | Monday 10 February 2025 09:31:23 +0000 (0:00:00.915) 0:14:16.765 ******* 2025-02-10 09:31:30.899916 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.899921 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.899926 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.899930 | orchestrator | 2025-02-10 09:31:30.899935 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-02-10 09:31:30.899944 | orchestrator | Monday 10 February 2025 09:31:23 +0000 (0:00:00.414) 0:14:17.179 ******* 2025-02-10 09:31:30.899949 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.899954 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.899962 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.899967 | orchestrator | 2025-02-10 09:31:30.899972 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-02-10 09:31:30.899977 | orchestrator | Monday 10 February 2025 09:31:25 +0000 (0:00:01.324) 0:14:18.503 ******* 2025-02-10 09:31:30.899982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:31:30.900001 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:31:30.900006 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:31:30.900011 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:31:30.900016 | orchestrator | 2025-02-10 09:31:30.900021 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-02-10 09:31:30.900026 | orchestrator | Monday 10 February 2025 09:31:26 +0000 (0:00:01.114) 0:14:19.618 ******* 2025-02-10 09:31:30.900030 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:31:30.900035 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:31:30.900040 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:31:30.900045 | orchestrator | 2025-02-10 09:31:30.900050 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-10 09:31:30.900058 | orchestrator | Monday 10 February 2025 09:31:26 +0000 (0:00:00.446) 0:14:20.064 ******* 2025-02-10 09:31:30.900063 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:31:30.900068 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:31:30.900073 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:31:30.900078 | orchestrator | 2025-02-10 09:31:30.900083 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:31:30.900088 | orchestrator | testbed-node-0 : ok=120  changed=33  unreachable=0 failed=0 skipped=274  rescued=0 ignored=0 2025-02-10 09:31:30.900097 | orchestrator | testbed-node-1 : ok=116  changed=32  unreachable=0 failed=0 skipped=263  rescued=0 ignored=0 2025-02-10 09:31:30.900105 | orchestrator | testbed-node-2 : ok=123  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0 2025-02-10 09:31:33.892061 | orchestrator | testbed-node-3 : ok=184  changed=50  unreachable=0 failed=0 skipped=366  rescued=0 ignored=0 2025-02-10 09:31:33.892194 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=310  rescued=0 ignored=0 2025-02-10 09:31:33.892210 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=308  rescued=0 ignored=0 2025-02-10 09:31:33.892223 | orchestrator | 2025-02-10 09:31:33.892234 | orchestrator 
| 2025-02-10 09:31:33.892245 | orchestrator | 2025-02-10 09:31:33.892258 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:31:33.892280 | orchestrator | Monday 10 February 2025 09:31:28 +0000 (0:00:01.454) 0:14:21.519 ******* 2025-02-10 09:31:33.892292 | orchestrator | =============================================================================== 2025-02-10 09:31:33.892303 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 40.32s 2025-02-10 09:31:33.892315 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 25.35s 2025-02-10 09:31:33.892326 | orchestrator | ceph-container-common : pulling nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy image -- 23.53s 2025-02-10 09:31:33.892339 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... ------------ 21.74s 2025-02-10 09:31:33.892351 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.32s 2025-02-10 09:31:33.892362 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 14.11s 2025-02-10 09:31:33.892373 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.85s 2025-02-10 09:31:33.892384 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 8.27s 2025-02-10 09:31:33.892395 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.60s 2025-02-10 09:31:33.892407 | orchestrator | ceph-config : create ceph initial directories --------------------------- 7.49s 2025-02-10 09:31:33.892418 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 7.17s 2025-02-10 09:31:33.892429 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 7.06s 2025-02-10 09:31:33.892440 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 6.91s 2025-02-10 09:31:33.892451 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 6.23s 2025-02-10 09:31:33.892462 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 5.82s 2025-02-10 09:31:33.892474 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 5.04s 2025-02-10 09:31:33.892485 | orchestrator | ceph-osd : apply operating system tuning -------------------------------- 4.47s 2025-02-10 09:31:33.892497 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 4.25s 2025-02-10 09:31:33.892508 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 4.14s 2025-02-10 09:31:33.892553 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 3.80s 2025-02-10 09:31:33.892566 | orchestrator | 2025-02-10 09:31:30 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:31:33.892577 | orchestrator | 2025-02-10 09:31:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:33.892606 | orchestrator | 2025-02-10 09:31:33 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:31:33.896697 | orchestrator | 2025-02-10 09:31:33 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:31:33.896775 | orchestrator | 2025-02-10 09:31:33 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is 
in state STARTED 2025-02-10 09:31:36.942005 | orchestrator | 2025-02-10 09:31:33 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:36.942213 | orchestrator | 2025-02-10 09:31:36 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:31:36.942556 | orchestrator | 2025-02-10 09:31:36 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:31:39.986192 | orchestrator | 2025-02-10 09:31:36 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:31:39.986335 | orchestrator | 2025-02-10 09:31:36 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:39.986375 | orchestrator | 2025-02-10 09:31:39 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:31:39.986852 | orchestrator | 2025-02-10 09:31:39 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:31:39.986940 | orchestrator | 2025-02-10 09:31:39 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:31:43.033228 | orchestrator | 2025-02-10 09:31:39 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:43.033367 | orchestrator | 2025-02-10 09:31:43 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:31:43.033512 | orchestrator | 2025-02-10 09:31:43 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:31:43.035802 | orchestrator | 2025-02-10 09:31:43 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:31:46.090737 | orchestrator | 2025-02-10 09:31:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:46.090999 | orchestrator | 2025-02-10 09:31:46 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:31:49.138993 | orchestrator | 2025-02-10 09:31:46 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:31:49.139138 | orchestrator | 2025-02-10 09:31:46 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:31:49.139301 | orchestrator | 2025-02-10 09:31:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:49.139402 | orchestrator | 2025-02-10 09:31:49 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:31:52.175486 | orchestrator | 2025-02-10 09:31:49 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:31:52.175626 | orchestrator | 2025-02-10 09:31:49 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:31:52.175646 | orchestrator | 2025-02-10 09:31:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:52.175679 | orchestrator | 2025-02-10 09:31:52 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:31:55.225672 | orchestrator | 2025-02-10 09:31:52 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:31:55.225853 | orchestrator | 2025-02-10 09:31:52 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:31:55.225875 | orchestrator | 2025-02-10 09:31:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:55.226214 | orchestrator | 2025-02-10 09:31:55 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:31:55.226419 | orchestrator | 2025-02-10 09:31:55 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:31:55.226543 | 
orchestrator | 2025-02-10 09:31:55 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:31:58.282361 | orchestrator | 2025-02-10 09:31:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:31:58.282536 | orchestrator | 2025-02-10 09:31:58 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:31:58.284573 | orchestrator | 2025-02-10 09:31:58 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:31:58.287644 | orchestrator | 2025-02-10 09:31:58 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:01.318320 | orchestrator | 2025-02-10 09:31:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:01.318490 | orchestrator | 2025-02-10 09:32:01 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:01.319956 | orchestrator | 2025-02-10 09:32:01 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:01.320010 | orchestrator | 2025-02-10 09:32:01 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:04.378235 | orchestrator | 2025-02-10 09:32:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:04.378469 | orchestrator | 2025-02-10 09:32:04 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:04.381289 | orchestrator | 2025-02-10 09:32:04 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:04.381397 | orchestrator | 2025-02-10 09:32:04 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:07.442424 | orchestrator | 2025-02-10 09:32:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:07.442590 | orchestrator | 2025-02-10 09:32:07 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:07.443028 | orchestrator | 2025-02-10 09:32:07 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:10.483629 | orchestrator | 2025-02-10 09:32:07 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:10.483980 | orchestrator | 2025-02-10 09:32:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:10.484042 | orchestrator | 2025-02-10 09:32:10 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:10.485562 | orchestrator | 2025-02-10 09:32:10 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:10.485599 | orchestrator | 2025-02-10 09:32:10 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:10.485622 | orchestrator | 2025-02-10 09:32:10 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:13.521056 | orchestrator | 2025-02-10 09:32:13 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:13.525161 | orchestrator | 2025-02-10 09:32:13 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:13.529245 | orchestrator | 2025-02-10 09:32:13 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:16.569595 | orchestrator | 2025-02-10 09:32:13 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:16.569732 | orchestrator | 2025-02-10 09:32:16 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:16.569966 | orchestrator | 2025-02-10 09:32:16 | INFO  | Task 
be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:16.569990 | orchestrator | 2025-02-10 09:32:16 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:16.571582 | orchestrator | 2025-02-10 09:32:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:19.631104 | orchestrator | 2025-02-10 09:32:19 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:19.635564 | orchestrator | 2025-02-10 09:32:19 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:19.642217 | orchestrator | 2025-02-10 09:32:19 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:22.688400 | orchestrator | 2025-02-10 09:32:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:22.688560 | orchestrator | 2025-02-10 09:32:22 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:22.689620 | orchestrator | 2025-02-10 09:32:22 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:22.689663 | orchestrator | 2025-02-10 09:32:22 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:25.732820 | orchestrator | 2025-02-10 09:32:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:25.733038 | orchestrator | 2025-02-10 09:32:25 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:25.734532 | orchestrator | 2025-02-10 09:32:25 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:25.736211 | orchestrator | 2025-02-10 09:32:25 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:28.781095 | orchestrator | 2025-02-10 09:32:25 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:28.781259 | orchestrator | 2025-02-10 09:32:28 | INFO  | Task f18d6425-66fb-49db-9bdc-3764089b22de is in state STARTED 2025-02-10 09:32:28.782205 | orchestrator | 2025-02-10 09:32:28 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:28.784753 | orchestrator | 2025-02-10 09:32:28 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:28.788148 | orchestrator | 2025-02-10 09:32:28 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:31.846550 | orchestrator | 2025-02-10 09:32:28 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:31.846791 | orchestrator | 2025-02-10 09:32:31 | INFO  | Task f18d6425-66fb-49db-9bdc-3764089b22de is in state STARTED 2025-02-10 09:32:31.847030 | orchestrator | 2025-02-10 09:32:31 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:31.847099 | orchestrator | 2025-02-10 09:32:31 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:31.848373 | orchestrator | 2025-02-10 09:32:31 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:31.848504 | orchestrator | 2025-02-10 09:32:31 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:34.903727 | orchestrator | 2025-02-10 09:32:34 | INFO  | Task f18d6425-66fb-49db-9bdc-3764089b22de is in state STARTED 2025-02-10 09:32:34.905864 | orchestrator | 2025-02-10 09:32:34 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:34.905975 | orchestrator | 2025-02-10 09:32:34 | INFO  | Task 
be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:34.906189 | orchestrator | 2025-02-10 09:32:34 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:37.953422 | orchestrator | 2025-02-10 09:32:34 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:37.953580 | orchestrator | 2025-02-10 09:32:37 | INFO  | Task f18d6425-66fb-49db-9bdc-3764089b22de is in state STARTED 2025-02-10 09:32:37.955093 | orchestrator | 2025-02-10 09:32:37 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:37.955130 | orchestrator | 2025-02-10 09:32:37 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:37.955522 | orchestrator | 2025-02-10 09:32:37 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:37.955744 | orchestrator | 2025-02-10 09:32:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:41.002664 | orchestrator | 2025-02-10 09:32:40 | INFO  | Task f18d6425-66fb-49db-9bdc-3764089b22de is in state STARTED 2025-02-10 09:32:41.004117 | orchestrator | 2025-02-10 09:32:40 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:41.004182 | orchestrator | 2025-02-10 09:32:41 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:41.006410 | orchestrator | 2025-02-10 09:32:41 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:44.060244 | orchestrator | 2025-02-10 09:32:41 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:44.060408 | orchestrator | 2025-02-10 09:32:44 | INFO  | Task f18d6425-66fb-49db-9bdc-3764089b22de is in state SUCCESS 2025-02-10 09:32:44.062264 | orchestrator | 2025-02-10 09:32:44 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:44.062393 | orchestrator | 2025-02-10 09:32:44 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:44.064531 | orchestrator | 2025-02-10 09:32:44 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:47.099318 | orchestrator | 2025-02-10 09:32:44 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:47.099570 | orchestrator | 2025-02-10 09:32:47 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:47.100025 | orchestrator | 2025-02-10 09:32:47 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:47.101051 | orchestrator | 2025-02-10 09:32:47 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:50.147804 | orchestrator | 2025-02-10 09:32:47 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:50.147970 | orchestrator | 2025-02-10 09:32:50 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:50.148424 | orchestrator | 2025-02-10 09:32:50 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:50.148454 | orchestrator | 2025-02-10 09:32:50 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:53.191376 | orchestrator | 2025-02-10 09:32:50 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:53.191521 | orchestrator | 2025-02-10 09:32:53 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:53.192493 | orchestrator | 2025-02-10 09:32:53 | INFO  | Task 
be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:53.193630 | orchestrator | 2025-02-10 09:32:53 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:56.236443 | orchestrator | 2025-02-10 09:32:53 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:56.236609 | orchestrator | 2025-02-10 09:32:56 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:56.237967 | orchestrator | 2025-02-10 09:32:56 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:56.238687 | orchestrator | 2025-02-10 09:32:56 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:32:59.283162 | orchestrator | 2025-02-10 09:32:56 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:32:59.283347 | orchestrator | 2025-02-10 09:32:59 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:32:59.285006 | orchestrator | 2025-02-10 09:32:59 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:32:59.285053 | orchestrator | 2025-02-10 09:32:59 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:33:02.327636 | orchestrator | 2025-02-10 09:32:59 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:02.327751 | orchestrator | 2025-02-10 09:33:02 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:33:02.330746 | orchestrator | 2025-02-10 09:33:02 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:33:05.378477 | orchestrator | 2025-02-10 09:33:02 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:33:05.378595 | orchestrator | 2025-02-10 09:33:02 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:05.378621 | orchestrator | 2025-02-10 09:33:05 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:33:05.379719 | orchestrator | 2025-02-10 09:33:05 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:33:05.380749 | orchestrator | 2025-02-10 09:33:05 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:33:08.427680 | orchestrator | 2025-02-10 09:33:05 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:08.427854 | orchestrator | 2025-02-10 09:33:08 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:33:08.428559 | orchestrator | 2025-02-10 09:33:08 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:33:08.428600 | orchestrator | 2025-02-10 09:33:08 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:33:11.468500 | orchestrator | 2025-02-10 09:33:08 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:11.468658 | orchestrator | 2025-02-10 09:33:11 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:33:11.471269 | orchestrator | 2025-02-10 09:33:11 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state STARTED 2025-02-10 09:33:11.480686 | orchestrator | 2025-02-10 09:33:11 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:33:14.534326 | orchestrator | 2025-02-10 09:33:11 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:14.534481 | orchestrator | 2025-02-10 09:33:14 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state 
STARTED 2025-02-10 09:33:14.534912 | orchestrator | 2025-02-10 09:33:14 | INFO  | Task be750918-c9c9-40ab-b721-ac3d9efe2fcb is in state SUCCESS 2025-02-10 09:33:14.536863 | orchestrator | 2025-02-10 09:33:14.536920 | orchestrator | None 2025-02-10 09:33:14.536959 | orchestrator | 2025-02-10 09:33:14.536975 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:33:14.536989 | orchestrator | 2025-02-10 09:33:14.537002 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:33:14.537032 | orchestrator | Monday 10 February 2025 09:31:25 +0000 (0:00:00.414) 0:00:00.414 ******* 2025-02-10 09:33:14.537045 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:14.537060 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:14.537072 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:14.537085 | orchestrator | 2025-02-10 09:33:14.537097 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:33:14.537110 | orchestrator | Monday 10 February 2025 09:31:26 +0000 (0:00:00.525) 0:00:00.940 ******* 2025-02-10 09:33:14.537122 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-02-10 09:33:14.537135 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-02-10 09:33:14.537148 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-02-10 09:33:14.537160 | orchestrator | 2025-02-10 09:33:14.537173 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-02-10 09:33:14.537185 | orchestrator | 2025-02-10 09:33:14.537197 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-10 09:33:14.537210 | orchestrator | Monday 10 February 2025 09:31:26 +0000 (0:00:00.632) 0:00:01.572 ******* 2025-02-10 09:33:14.537223 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:33:14.537237 | orchestrator | 2025-02-10 09:33:14.537250 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-02-10 09:33:14.537263 | orchestrator | Monday 10 February 2025 09:31:27 +0000 (0:00:00.749) 0:00:02.322 ******* 2025-02-10 09:33:14.537280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:33:14.537328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:33:14.537344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:33:14.537365 | orchestrator | 2025-02-10 09:33:14.537377 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-02-10 09:33:14.537390 | orchestrator | Monday 10 February 2025 09:31:29 +0000 (0:00:01.975) 0:00:04.297 ******* 2025-02-10 09:33:14.537403 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:14.537416 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:14.537428 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:14.537441 | orchestrator | 2025-02-10 09:33:14.537454 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-10 09:33:14.537469 | orchestrator | Monday 10 February 2025 09:31:29 +0000 (0:00:00.305) 0:00:04.602 ******* 2025-02-10 09:33:14.537489 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-02-10 09:33:14.537503 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-02-10 09:33:14.537517 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-02-10 09:33:14.537531 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-02-10 09:33:14.537545 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-02-10 09:33:14.537559 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-02-10 09:33:14.537572 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-02-10 09:33:14.537902 | orchestrator | skipping: 
[testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-02-10 09:33:14.537921 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-02-10 09:33:14.537934 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-02-10 09:33:14.537979 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-02-10 09:33:14.538001 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-02-10 09:33:14.538155 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-02-10 09:33:14.538176 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-02-10 09:33:14.538189 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-02-10 09:33:14.538201 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-02-10 09:33:14.538213 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-02-10 09:33:14.538226 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-02-10 09:33:14.538239 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-02-10 09:33:14.538253 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-02-10 09:33:14.538266 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-02-10 09:33:14.538278 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-02-10 09:33:14.538291 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-02-10 09:33:14.538316 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ironic', 'enabled': True}) 2025-02-10 09:33:14.538328 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-02-10 09:33:14.538341 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-02-10 09:33:14.538353 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-02-10 09:33:14.538365 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-02-10 09:33:14.538377 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-02-10 09:33:14.538390 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-02-10 09:33:14.538402 | orchestrator | 2025-02-10 09:33:14.538414 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:33:14.538427 | orchestrator | Monday 10 February 2025 09:31:30 +0000 (0:00:01.077) 0:00:05.679 ******* 2025-02-10 09:33:14.538439 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:14.538452 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:14.538464 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:14.538477 | orchestrator | 2025-02-10 09:33:14.538498 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:33:14.538511 | orchestrator | Monday 10 February 2025 09:31:31 +0000 (0:00:00.477) 0:00:06.157 ******* 2025-02-10 09:33:14.538523 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.538537 | orchestrator | 2025-02-10 09:33:14.538550 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:33:14.538570 | orchestrator | Monday 10 February 2025 09:31:31 +0000 (0:00:00.120) 0:00:06.278 ******* 2025-02-10 09:33:14.538583 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.538596 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:14.538608 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:14.538620 | orchestrator | 2025-02-10 09:33:14.538633 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:33:14.538645 | orchestrator | Monday 10 February 2025 09:31:32 +0000 (0:00:00.458) 0:00:06.737 ******* 2025-02-10 09:33:14.538657 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:14.538669 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:14.538682 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:14.538694 | orchestrator | 2025-02-10 09:33:14.538706 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:33:14.538718 | orchestrator | Monday 10 February 2025 09:31:32 +0000 (0:00:00.427) 0:00:07.165 ******* 2025-02-10 09:33:14.538731 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.538750 | orchestrator | 2025-02-10 09:33:14.538773 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:33:14.538793 | orchestrator | Monday 10 February 2025 09:31:32 +0000 (0:00:00.269) 0:00:07.435 ******* 2025-02-10 09:33:14.538815 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.538837 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:14.538861 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:14.538877 | orchestrator | 2025-02-10 09:33:14.538891 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:33:14.538905 | orchestrator | Monday 10 February 2025 09:31:33 +0000 (0:00:00.326) 0:00:07.762 ******* 2025-02-10 09:33:14.538919 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:14.538963 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:14.538978 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:14.538992 | orchestrator | 2025-02-10 09:33:14.539006 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:33:14.539021 | orchestrator | Monday 10 February 2025 09:31:33 +0000 (0:00:00.545) 0:00:08.307 ******* 2025-02-10 09:33:14.539034 | 
orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.539048 | orchestrator | 2025-02-10 09:33:14.539062 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:33:14.539075 | orchestrator | Monday 10 February 2025 09:31:33 +0000 (0:00:00.161) 0:00:08.469 ******* 2025-02-10 09:33:14.539089 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.539102 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:14.539116 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:14.539130 | orchestrator | 2025-02-10 09:33:14.539142 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:33:14.539154 | orchestrator | Monday 10 February 2025 09:31:34 +0000 (0:00:00.501) 0:00:08.971 ******* 2025-02-10 09:33:14.539166 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:14.539179 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:14.539191 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:14.539203 | orchestrator | 2025-02-10 09:33:14.539215 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:33:14.539228 | orchestrator | Monday 10 February 2025 09:31:34 +0000 (0:00:00.499) 0:00:09.471 ******* 2025-02-10 09:33:14.539240 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.539252 | orchestrator | 2025-02-10 09:33:14.539264 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:33:14.539277 | orchestrator | Monday 10 February 2025 09:31:34 +0000 (0:00:00.199) 0:00:09.670 ******* 2025-02-10 09:33:14.539289 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.539301 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:14.539313 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:14.539325 | orchestrator | 2025-02-10 09:33:14.539338 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:33:14.539350 | orchestrator | Monday 10 February 2025 09:31:35 +0000 (0:00:00.514) 0:00:10.184 ******* 2025-02-10 09:33:14.539362 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:14.539375 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:14.539387 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:14.539399 | orchestrator | 2025-02-10 09:33:14.539411 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:33:14.539423 | orchestrator | Monday 10 February 2025 09:31:35 +0000 (0:00:00.477) 0:00:10.662 ******* 2025-02-10 09:33:14.539436 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.539448 | orchestrator | 2025-02-10 09:33:14.539460 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:33:14.539472 | orchestrator | Monday 10 February 2025 09:31:36 +0000 (0:00:00.143) 0:00:10.805 ******* 2025-02-10 09:33:14.539484 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.539496 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:14.539508 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:14.539520 | orchestrator | 2025-02-10 09:33:14.539533 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:33:14.539545 | orchestrator | Monday 10 February 2025 09:31:36 +0000 (0:00:00.501) 0:00:11.307 ******* 2025-02-10 09:33:14.539557 | orchestrator | ok: 
[testbed-node-0] 2025-02-10 09:33:14.539576 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:14.539589 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:14.539601 | orchestrator | 2025-02-10 09:33:14.539614 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:33:14.539630 | orchestrator | Monday 10 February 2025 09:31:36 +0000 (0:00:00.337) 0:00:11.645 ******* 2025-02-10 09:33:14.539643 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.539655 | orchestrator | 2025-02-10 09:33:14.539667 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:33:14.539686 | orchestrator | Monday 10 February 2025 09:31:37 +0000 (0:00:00.278) 0:00:11.923 ******* 2025-02-10 09:33:14.539698 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.539710 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:14.539723 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:14.539735 | orchestrator | 2025-02-10 09:33:14.539747 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:33:14.539759 | orchestrator | Monday 10 February 2025 09:31:37 +0000 (0:00:00.424) 0:00:12.348 ******* 2025-02-10 09:33:14.539771 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:14.539784 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:14.539796 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:14.539808 | orchestrator | 2025-02-10 09:33:14.539828 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:33:14.539841 | orchestrator | Monday 10 February 2025 09:31:38 +0000 (0:00:00.608) 0:00:12.956 ******* 2025-02-10 09:33:14.539853 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.539866 | orchestrator | 2025-02-10 09:33:14.539878 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:33:14.539890 | orchestrator | Monday 10 February 2025 09:31:38 +0000 (0:00:00.147) 0:00:13.103 ******* 2025-02-10 09:33:14.539902 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.539914 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:14.539927 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:14.540019 | orchestrator | 2025-02-10 09:33:14.540040 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:33:14.540057 | orchestrator | Monday 10 February 2025 09:31:39 +0000 (0:00:00.710) 0:00:13.813 ******* 2025-02-10 09:33:14.540077 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:14.540098 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:14.540117 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:14.540137 | orchestrator | 2025-02-10 09:33:14.540154 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:33:14.540166 | orchestrator | Monday 10 February 2025 09:31:39 +0000 (0:00:00.590) 0:00:14.404 ******* 2025-02-10 09:33:14.540179 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.540191 | orchestrator | 2025-02-10 09:33:14.540204 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:33:14.540216 | orchestrator | Monday 10 February 2025 09:31:39 +0000 (0:00:00.143) 0:00:14.548 ******* 2025-02-10 09:33:14.540229 | orchestrator | skipping: [testbed-node-0] 2025-02-10 
09:33:14.540241 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:14.540254 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:14.540266 | orchestrator | 2025-02-10 09:33:14.540278 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:33:14.540291 | orchestrator | Monday 10 February 2025 09:31:40 +0000 (0:00:00.531) 0:00:15.079 ******* 2025-02-10 09:33:14.540303 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:14.540315 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:14.540328 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:14.540339 | orchestrator | 2025-02-10 09:33:14.540349 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:33:14.540359 | orchestrator | Monday 10 February 2025 09:31:40 +0000 (0:00:00.381) 0:00:15.460 ******* 2025-02-10 09:33:14.540369 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.540380 | orchestrator | 2025-02-10 09:33:14.540390 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:33:14.540400 | orchestrator | Monday 10 February 2025 09:31:41 +0000 (0:00:00.281) 0:00:15.742 ******* 2025-02-10 09:33:14.540410 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.540420 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:14.540430 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:14.540440 | orchestrator | 2025-02-10 09:33:14.540450 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:33:14.540468 | orchestrator | Monday 10 February 2025 09:31:41 +0000 (0:00:00.308) 0:00:16.051 ******* 2025-02-10 09:33:14.540478 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:14.540489 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:14.540499 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:14.540509 | orchestrator | 2025-02-10 09:33:14.540519 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:33:14.540529 | orchestrator | Monday 10 February 2025 09:31:42 +0000 (0:00:00.744) 0:00:16.795 ******* 2025-02-10 09:33:14.540539 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.540549 | orchestrator | 2025-02-10 09:33:14.540560 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:33:14.540570 | orchestrator | Monday 10 February 2025 09:31:42 +0000 (0:00:00.391) 0:00:17.187 ******* 2025-02-10 09:33:14.540580 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.540590 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:14.540600 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:14.540610 | orchestrator | 2025-02-10 09:33:14.540620 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:33:14.540630 | orchestrator | Monday 10 February 2025 09:31:43 +0000 (0:00:00.661) 0:00:17.848 ******* 2025-02-10 09:33:14.540641 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:14.540651 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:14.540661 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:14.540671 | orchestrator | 2025-02-10 09:33:14.540681 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:33:14.540691 | orchestrator | Monday 10 February 2025 09:31:43 +0000 (0:00:00.506) 
0:00:18.354 ******* 2025-02-10 09:33:14.540701 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.540711 | orchestrator | 2025-02-10 09:33:14.540721 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:33:14.540737 | orchestrator | Monday 10 February 2025 09:31:43 +0000 (0:00:00.147) 0:00:18.502 ******* 2025-02-10 09:33:14.540747 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.540757 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:14.540768 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:14.540778 | orchestrator | 2025-02-10 09:33:14.540788 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-10 09:33:14.540798 | orchestrator | Monday 10 February 2025 09:31:44 +0000 (0:00:00.705) 0:00:19.207 ******* 2025-02-10 09:33:14.540809 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:33:14.540819 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:33:14.540829 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:33:14.540839 | orchestrator | 2025-02-10 09:33:14.540849 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-10 09:33:14.540859 | orchestrator | Monday 10 February 2025 09:31:45 +0000 (0:00:00.563) 0:00:19.771 ******* 2025-02-10 09:33:14.540870 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.540880 | orchestrator | 2025-02-10 09:33:14.540890 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-10 09:33:14.540900 | orchestrator | Monday 10 February 2025 09:31:45 +0000 (0:00:00.130) 0:00:19.902 ******* 2025-02-10 09:33:14.540910 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.540926 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:14.540958 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:14.540970 | orchestrator | 2025-02-10 09:33:14.540980 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-02-10 09:33:14.540990 | orchestrator | Monday 10 February 2025 09:31:45 +0000 (0:00:00.341) 0:00:20.243 ******* 2025-02-10 09:33:14.541000 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:14.541010 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:14.541020 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:14.541035 | orchestrator | 2025-02-10 09:33:14.541046 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-02-10 09:33:14.541062 | orchestrator | Monday 10 February 2025 09:31:49 +0000 (0:00:03.623) 0:00:23.867 ******* 2025-02-10 09:33:14.541072 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-02-10 09:33:14.541082 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-02-10 09:33:14.541093 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-02-10 09:33:14.541103 | orchestrator | 2025-02-10 09:33:14.541113 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-02-10 09:33:14.541123 | orchestrator | Monday 10 February 2025 09:31:52 +0000 (0:00:03.410) 0:00:27.277 ******* 2025-02-10 09:33:14.541133 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-02-10 09:33:14.541143 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-02-10 09:33:14.541153 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-02-10 09:33:14.541164 | orchestrator | 2025-02-10 09:33:14.541174 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-02-10 09:33:14.541184 | orchestrator | Monday 10 February 2025 09:31:56 +0000 (0:00:03.588) 0:00:30.866 ******* 2025-02-10 09:33:14.541194 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-02-10 09:33:14.541204 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-02-10 09:33:14.541214 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-02-10 09:33:14.541224 | orchestrator | 2025-02-10 09:33:14.541234 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-02-10 09:33:14.541244 | orchestrator | Monday 10 February 2025 09:31:59 +0000 (0:00:02.999) 0:00:33.866 ******* 2025-02-10 09:33:14.541254 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.541264 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:14.541275 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:14.541284 | orchestrator | 2025-02-10 09:33:14.541295 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-02-10 09:33:14.541305 | orchestrator | Monday 10 February 2025 09:31:59 +0000 (0:00:00.481) 0:00:34.347 ******* 2025-02-10 09:33:14.541315 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.541325 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:14.541335 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:14.541345 | orchestrator | 2025-02-10 09:33:14.541355 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-10 09:33:14.541365 | orchestrator | Monday 10 February 2025 09:32:00 +0000 (0:00:00.462) 0:00:34.810 ******* 2025-02-10 09:33:14.541375 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:33:14.541386 | orchestrator | 2025-02-10 09:33:14.541396 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-02-10 09:33:14.541406 | orchestrator | Monday 10 February 2025 09:32:01 +0000 (0:00:00.943) 0:00:35.753 ******* 2025-02-10 09:33:14.541426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:33:14.541456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 
09:33:14.541475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:33:14.541493 | orchestrator | 2025-02-10 09:33:14.541503 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-02-10 09:33:14.541514 | orchestrator | Monday 10 February 2025 09:32:04 +0000 (0:00:03.112) 0:00:38.865 ******* 2025-02-10 09:33:14.541524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-10 09:33:14.541544 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.541563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-10 09:33:14.541575 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:14.541586 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-10 09:33:14.541603 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:14.541613 | orchestrator | 2025-02-10 09:33:14.541624 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-02-10 09:33:14.541634 | orchestrator | Monday 10 February 2025 09:32:06 +0000 (0:00:02.051) 0:00:40.917 ******* 2025-02-10 09:33:14.541651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-10 09:33:14.541663 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.541679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-10 09:33:14.541696 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:14.541706 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-10 09:33:14.541717 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:14.541727 | orchestrator | 2025-02-10 09:33:14.541738 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-02-10 09:33:14.541748 | orchestrator | Monday 10 February 2025 09:32:07 +0000 (0:00:01.702) 0:00:42.619 ******* 2025-02-10 09:33:14.541765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 
'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:33:14.541788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:33:14.541811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 
'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-10 09:33:14.541826 | orchestrator | 2025-02-10 09:33:14.541837 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-10 09:33:14.541847 | orchestrator | Monday 10 February 2025 09:32:15 +0000 (0:00:07.303) 0:00:49.923 ******* 2025-02-10 09:33:14.541857 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:33:14.541868 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:33:14.541878 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:33:14.541888 | orchestrator | 2025-02-10 09:33:14.541898 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-10 09:33:14.541912 | orchestrator | Monday 10 February 2025 09:32:15 +0000 (0:00:00.490) 0:00:50.414 ******* 2025-02-10 09:33:14.541926 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:33:14.541957 | orchestrator | 2025-02-10 09:33:14.541972 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-02-10 09:33:14.541982 | orchestrator | Monday 10 February 2025 09:32:16 +0000 (0:00:01.038) 0:00:51.452 ******* 2025-02-10 09:33:14.541992 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:14.542002 | orchestrator | 2025-02-10 09:33:14.542012 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-02-10 09:33:14.542055 | orchestrator | Monday 
10 February 2025 09:32:19 +0000 (0:00:03.024) 0:00:54.476 ******* 2025-02-10 09:33:14.542066 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:14.542076 | orchestrator | 2025-02-10 09:33:14.542086 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-02-10 09:33:14.542096 | orchestrator | Monday 10 February 2025 09:32:22 +0000 (0:00:02.552) 0:00:57.028 ******* 2025-02-10 09:33:14.542106 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:14.542117 | orchestrator | 2025-02-10 09:33:14.542134 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-02-10 09:33:14.542159 | orchestrator | Monday 10 February 2025 09:32:34 +0000 (0:00:12.081) 0:01:09.110 ******* 2025-02-10 09:33:14.542175 | orchestrator | 2025-02-10 09:33:14.542191 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-02-10 09:33:14.542207 | orchestrator | Monday 10 February 2025 09:32:34 +0000 (0:00:00.070) 0:01:09.180 ******* 2025-02-10 09:33:14.542222 | orchestrator | 2025-02-10 09:33:14.542239 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-02-10 09:33:14.542256 | orchestrator | Monday 10 February 2025 09:32:34 +0000 (0:00:00.227) 0:01:09.407 ******* 2025-02-10 09:33:14.542273 | orchestrator | 2025-02-10 09:33:14.542291 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-02-10 09:33:14.542307 | orchestrator | Monday 10 February 2025 09:32:34 +0000 (0:00:00.060) 0:01:09.468 ******* 2025-02-10 09:33:14.542324 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:33:14.542341 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:33:14.542358 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:33:14.542374 | orchestrator | 2025-02-10 09:33:14.542392 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:33:14.542409 | orchestrator | testbed-node-0 : ok=41  changed=11  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-02-10 09:33:14.542427 | orchestrator | testbed-node-1 : ok=38  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-02-10 09:33:14.542444 | orchestrator | testbed-node-2 : ok=38  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-02-10 09:33:14.542461 | orchestrator | 2025-02-10 09:33:14.542479 | orchestrator | 2025-02-10 09:33:14.542496 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:33:14.542512 | orchestrator | Monday 10 February 2025 09:33:12 +0000 (0:00:38.027) 0:01:47.496 ******* 2025-02-10 09:33:14.542529 | orchestrator | =============================================================================== 2025-02-10 09:33:14.542546 | orchestrator | horizon : Restart horizon container ------------------------------------ 38.03s 2025-02-10 09:33:14.542563 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 12.08s 2025-02-10 09:33:14.542579 | orchestrator | horizon : Deploy horizon container -------------------------------------- 7.30s 2025-02-10 09:33:14.542596 | orchestrator | horizon : Copying over config.json files for services ------------------- 3.62s 2025-02-10 09:33:14.542613 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 3.59s 2025-02-10 09:33:14.542630 | orchestrator | horizon : Copying 
over horizon.conf ------------------------------------- 3.41s 2025-02-10 09:33:14.542655 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 3.11s 2025-02-10 09:33:17.578574 | orchestrator | horizon : Creating Horizon database ------------------------------------- 3.02s 2025-02-10 09:33:17.578719 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 3.00s 2025-02-10 09:33:17.578740 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.55s 2025-02-10 09:33:17.578755 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 2.05s 2025-02-10 09:33:17.578795 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.98s 2025-02-10 09:33:17.578810 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.70s 2025-02-10 09:33:17.578830 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.08s 2025-02-10 09:33:17.578845 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.04s 2025-02-10 09:33:17.578859 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.94s 2025-02-10 09:33:17.578873 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s 2025-02-10 09:33:17.578915 | orchestrator | horizon : Update policy file name --------------------------------------- 0.74s 2025-02-10 09:33:17.578930 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.71s 2025-02-10 09:33:17.578972 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.71s 2025-02-10 09:33:17.578987 | orchestrator | 2025-02-10 09:33:14 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:33:17.579002 | orchestrator | 2025-02-10 09:33:14 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:17.579035 | orchestrator | 2025-02-10 09:33:17 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:33:17.579818 | orchestrator | 2025-02-10 09:33:17 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:33:20.636925 | orchestrator | 2025-02-10 09:33:17 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:20.637106 | orchestrator | 2025-02-10 09:33:20 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:33:23.669403 | orchestrator | 2025-02-10 09:33:20 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:33:23.669506 | orchestrator | 2025-02-10 09:33:20 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:23.669527 | orchestrator | 2025-02-10 09:33:23 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:33:23.669643 | orchestrator | 2025-02-10 09:33:23 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:33:26.722469 | orchestrator | 2025-02-10 09:33:23 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:26.722664 | orchestrator | 2025-02-10 09:33:26 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:33:26.724861 | orchestrator | 2025-02-10 09:33:26 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:33:29.763032 | orchestrator | 2025-02-10 09:33:26 | INFO  | Wait 
1 second(s) until the next check 2025-02-10 09:33:29.763194 | orchestrator | 2025-02-10 09:33:29 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:33:32.815684 | orchestrator | 2025-02-10 09:33:29 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:33:32.815878 | orchestrator | 2025-02-10 09:33:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:32.815927 | orchestrator | 2025-02-10 09:33:32 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:33:32.816547 | orchestrator | 2025-02-10 09:33:32 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:33:35.870746 | orchestrator | 2025-02-10 09:33:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:35.870931 | orchestrator | 2025-02-10 09:33:35 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:33:38.924501 | orchestrator | 2025-02-10 09:33:35 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:33:38.924715 | orchestrator | 2025-02-10 09:33:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:38.924758 | orchestrator | 2025-02-10 09:33:38 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:33:38.924905 | orchestrator | 2025-02-10 09:33:38 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:33:41.977431 | orchestrator | 2025-02-10 09:33:38 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:41.977691 | orchestrator | 2025-02-10 09:33:41 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:33:45.044756 | orchestrator | 2025-02-10 09:33:41 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:33:45.044884 | orchestrator | 2025-02-10 09:33:41 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:45.044918 | orchestrator | 2025-02-10 09:33:45 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:33:45.045270 | orchestrator | 2025-02-10 09:33:45 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:33:48.086699 | orchestrator | 2025-02-10 09:33:45 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:48.086863 | orchestrator | 2025-02-10 09:33:48 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:33:51.127857 | orchestrator | 2025-02-10 09:33:48 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:33:51.128056 | orchestrator | 2025-02-10 09:33:48 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:51.128096 | orchestrator | 2025-02-10 09:33:51 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state STARTED 2025-02-10 09:33:54.190448 | orchestrator | 2025-02-10 09:33:51 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:33:54.190598 | orchestrator | 2025-02-10 09:33:51 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:33:54.190639 | orchestrator | 2025-02-10 09:33:54 | INFO  | Task da1f81b1-77a5-4e32-9ffc-336c1e185339 is in state SUCCESS 2025-02-10 09:33:54.193041 | orchestrator | 2025-02-10 09:33:54.193223 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-10 09:33:54.193272 | orchestrator | 2025-02-10 09:33:54.193288 | orchestrator | PLAY [Create ceph pools] 
******************************************************* 2025-02-10 09:33:54.193303 | orchestrator | 2025-02-10 09:33:54.193318 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-02-10 09:33:54.193332 | orchestrator | Monday 10 February 2025 09:31:33 +0000 (0:00:01.254) 0:00:01.254 ******* 2025-02-10 09:33:54.193348 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:54.193371 | orchestrator | 2025-02-10 09:33:54.193386 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-02-10 09:33:54.193401 | orchestrator | Monday 10 February 2025 09:31:34 +0000 (0:00:00.620) 0:00:01.875 ******* 2025-02-10 09:33:54.193416 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-02-10 09:33:54.193430 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-02-10 09:33:54.193445 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-02-10 09:33:54.193459 | orchestrator | 2025-02-10 09:33:54.193473 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-02-10 09:33:54.193948 | orchestrator | Monday 10 February 2025 09:31:35 +0000 (0:00:00.985) 0:00:02.861 ******* 2025-02-10 09:33:54.193997 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:54.194013 | orchestrator | 2025-02-10 09:33:54.194124 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-02-10 09:33:54.194140 | orchestrator | Monday 10 February 2025 09:31:36 +0000 (0:00:00.767) 0:00:03.628 ******* 2025-02-10 09:33:54.194154 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:54.194170 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:54.194185 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:54.194199 | orchestrator | 2025-02-10 09:33:54.194239 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-02-10 09:33:54.194255 | orchestrator | Monday 10 February 2025 09:31:36 +0000 (0:00:00.792) 0:00:04.421 ******* 2025-02-10 09:33:54.194270 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:54.194313 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:54.194328 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:54.194342 | orchestrator | 2025-02-10 09:33:54.194357 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-02-10 09:33:54.194371 | orchestrator | Monday 10 February 2025 09:31:37 +0000 (0:00:00.456) 0:00:04.878 ******* 2025-02-10 09:33:54.194385 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:54.194400 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:54.194414 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:54.194428 | orchestrator | 2025-02-10 09:33:54.194443 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-02-10 09:33:54.194457 | orchestrator | Monday 10 February 2025 09:31:38 +0000 (0:00:01.127) 0:00:06.005 ******* 2025-02-10 09:33:54.194471 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:54.194486 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:54.194500 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:54.194515 | orchestrator | 2025-02-10 09:33:54.194529 | orchestrator | TASK [ceph-facts : 
set_fact ceph_cmd] ****************************************** 2025-02-10 09:33:54.194543 | orchestrator | Monday 10 February 2025 09:31:39 +0000 (0:00:00.590) 0:00:06.596 ******* 2025-02-10 09:33:54.194558 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:54.194572 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:54.194586 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:54.194600 | orchestrator | 2025-02-10 09:33:54.194614 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-02-10 09:33:54.194629 | orchestrator | Monday 10 February 2025 09:31:39 +0000 (0:00:00.447) 0:00:07.044 ******* 2025-02-10 09:33:54.194643 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:54.194657 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:54.194671 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:54.194684 | orchestrator | 2025-02-10 09:33:54.194698 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-02-10 09:33:54.194713 | orchestrator | Monday 10 February 2025 09:31:40 +0000 (0:00:00.578) 0:00:07.622 ******* 2025-02-10 09:33:54.194727 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.194742 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.194756 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.194771 | orchestrator | 2025-02-10 09:33:54.194785 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-02-10 09:33:54.194800 | orchestrator | Monday 10 February 2025 09:31:40 +0000 (0:00:00.350) 0:00:07.973 ******* 2025-02-10 09:33:54.194814 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:54.194828 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:54.194842 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:54.194857 | orchestrator | 2025-02-10 09:33:54.194878 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-02-10 09:33:54.194893 | orchestrator | Monday 10 February 2025 09:31:40 +0000 (0:00:00.316) 0:00:08.290 ******* 2025-02-10 09:33:54.194907 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-10 09:33:54.194921 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:33:54.194935 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:33:54.194948 | orchestrator | 2025-02-10 09:33:54.194996 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-02-10 09:33:54.195022 | orchestrator | Monday 10 February 2025 09:31:41 +0000 (0:00:00.991) 0:00:09.282 ******* 2025-02-10 09:33:54.195045 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:54.195064 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:54.195078 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:54.195093 | orchestrator | 2025-02-10 09:33:54.195107 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-02-10 09:33:54.195121 | orchestrator | Monday 10 February 2025 09:31:42 +0000 (0:00:00.845) 0:00:10.128 ******* 2025-02-10 09:33:54.195150 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-10 09:33:54.195176 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:33:54.195190 | 
orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:33:54.195204 | orchestrator | 2025-02-10 09:33:54.195218 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-02-10 09:33:54.195232 | orchestrator | Monday 10 February 2025 09:31:45 +0000 (0:00:02.388) 0:00:12.516 ******* 2025-02-10 09:33:54.195246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:33:54.195260 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:33:54.195274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:33:54.195288 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.195302 | orchestrator | 2025-02-10 09:33:54.195317 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-02-10 09:33:54.195330 | orchestrator | Monday 10 February 2025 09:31:45 +0000 (0:00:00.513) 0:00:13.030 ******* 2025-02-10 09:33:54.195351 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-02-10 09:33:54.195369 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-02-10 09:33:54.195383 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-02-10 09:33:54.195397 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.195411 | orchestrator | 2025-02-10 09:33:54.195425 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-02-10 09:33:54.195439 | orchestrator | Monday 10 February 2025 09:31:46 +0000 (0:00:00.770) 0:00:13.800 ******* 2025-02-10 09:33:54.195459 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-10 09:33:54.195481 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-10 09:33:54.195496 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 
'item'}, 'ansible_loop_var': 'item'})  2025-02-10 09:33:54.195510 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.195530 | orchestrator | 2025-02-10 09:33:54.195544 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-02-10 09:33:54.195558 | orchestrator | Monday 10 February 2025 09:31:46 +0000 (0:00:00.247) 0:00:14.047 ******* 2025-02-10 09:33:54.195575 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '4484a4da621b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-02-10 09:31:43.622118', 'end': '2025-02-10 09:31:43.673968', 'delta': '0:00:00.051850', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4484a4da621b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-02-10 09:33:54.195612 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '20ef0d984121', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-02-10 09:31:44.270514', 'end': '2025-02-10 09:31:44.316857', 'delta': '0:00:00.046343', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['20ef0d984121'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-02-10 09:33:54.195630 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'a6a262b85557', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-02-10 09:31:44.843755', 'end': '2025-02-10 09:31:44.883996', 'delta': '0:00:00.040241', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a6a262b85557'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-02-10 09:33:54.195644 | orchestrator | 2025-02-10 09:33:54.195658 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-02-10 09:33:54.195673 | orchestrator | Monday 10 February 2025 09:31:46 +0000 (0:00:00.367) 0:00:14.415 ******* 2025-02-10 09:33:54.195687 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:54.195702 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:54.195716 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:54.195730 | orchestrator | 2025-02-10 09:33:54.195744 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-02-10 09:33:54.195758 | orchestrator | Monday 10 February 2025 09:31:47 +0000 (0:00:00.682) 0:00:15.097 ******* 2025-02-10 09:33:54.195772 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-02-10 
09:33:54.195786 | orchestrator | 2025-02-10 09:33:54.195800 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-02-10 09:33:54.195814 | orchestrator | Monday 10 February 2025 09:31:49 +0000 (0:00:01.589) 0:00:16.687 ******* 2025-02-10 09:33:54.195828 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.195842 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.195857 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.195871 | orchestrator | 2025-02-10 09:33:54.195885 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-02-10 09:33:54.195900 | orchestrator | Monday 10 February 2025 09:31:49 +0000 (0:00:00.336) 0:00:17.023 ******* 2025-02-10 09:33:54.195914 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.195929 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.195943 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.195990 | orchestrator | 2025-02-10 09:33:54.196007 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-10 09:33:54.196021 | orchestrator | Monday 10 February 2025 09:31:50 +0000 (0:00:00.629) 0:00:17.653 ******* 2025-02-10 09:33:54.196035 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.196057 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.196071 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.196085 | orchestrator | 2025-02-10 09:33:54.196099 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-02-10 09:33:54.196113 | orchestrator | Monday 10 February 2025 09:31:50 +0000 (0:00:00.428) 0:00:18.082 ******* 2025-02-10 09:33:54.196127 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:54.196141 | orchestrator | 2025-02-10 09:33:54.196161 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-02-10 09:33:54.196176 | orchestrator | Monday 10 February 2025 09:31:50 +0000 (0:00:00.165) 0:00:18.248 ******* 2025-02-10 09:33:54.196190 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.196205 | orchestrator | 2025-02-10 09:33:54.196219 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-10 09:33:54.196233 | orchestrator | Monday 10 February 2025 09:31:51 +0000 (0:00:00.327) 0:00:18.576 ******* 2025-02-10 09:33:54.196247 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.196262 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.196276 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.196289 | orchestrator | 2025-02-10 09:33:54.196304 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-02-10 09:33:54.196318 | orchestrator | Monday 10 February 2025 09:31:51 +0000 (0:00:00.696) 0:00:19.273 ******* 2025-02-10 09:33:54.196332 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.196346 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.196360 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.196375 | orchestrator | 2025-02-10 09:33:54.196389 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-02-10 09:33:54.196403 | orchestrator | Monday 10 February 2025 09:31:52 +0000 (0:00:00.402) 0:00:19.675 ******* 2025-02-10 09:33:54.196417 | orchestrator | skipping: [testbed-node-3] 
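The "find a running mon container" entries above show ceph-facts probing each monitor host with `docker ps -q --filter name=ceph-mon-<hostname>` and keeping the container ID printed on stdout. A minimal Python sketch of that probe follows; the hostnames and the Docker CLI invocation are taken from the log, everything else (function name, local execution instead of delegation) is illustrative and not the ceph-ansible implementation:

```python
# Probe each monitor host for a running ceph-mon container, mirroring the
# "docker ps -q --filter name=ceph-mon-<hostname>" commands seen in the log.
# Illustrative sketch only; ceph-ansible runs this via delegated command tasks.
import subprocess

mon_hosts = ["testbed-node-0", "testbed-node-1", "testbed-node-2"]

def find_running_mon(hosts):
    for host in hosts:
        # In the playbook this command runs remotely on each mon host;
        # here it is shown as a plain local invocation for clarity.
        result = subprocess.run(
            ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{host}"],
            capture_output=True, text=True, check=False,
        )
        container_id = result.stdout.strip()
        if container_id:
            # First host with a live mon container wins, as with running_mon.
            return host, container_id
    return None, None

if __name__ == "__main__":
    host, cid = find_running_mon(mon_hosts)
    print(f"running mon: {host} ({cid})" if cid else "no running mon found")
```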
2025-02-10 09:33:54.196432 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.196447 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.196461 | orchestrator | 2025-02-10 09:33:54.196476 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-02-10 09:33:54.196490 | orchestrator | Monday 10 February 2025 09:31:52 +0000 (0:00:00.418) 0:00:20.094 ******* 2025-02-10 09:33:54.196504 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.196518 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.196539 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.196555 | orchestrator | 2025-02-10 09:33:54.196569 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-02-10 09:33:54.196584 | orchestrator | Monday 10 February 2025 09:31:53 +0000 (0:00:00.441) 0:00:20.535 ******* 2025-02-10 09:33:54.196598 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.196612 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.196627 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.196641 | orchestrator | 2025-02-10 09:33:54.196655 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-02-10 09:33:54.196670 | orchestrator | Monday 10 February 2025 09:31:53 +0000 (0:00:00.718) 0:00:21.254 ******* 2025-02-10 09:33:54.196684 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.196698 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.196713 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.196729 | orchestrator | 2025-02-10 09:33:54.196743 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-02-10 09:33:54.196758 | orchestrator | Monday 10 February 2025 09:31:54 +0000 (0:00:00.434) 0:00:21.688 ******* 2025-02-10 09:33:54.196772 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.196786 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.196801 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.196815 | orchestrator | 2025-02-10 09:33:54.196829 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-02-10 09:33:54.196852 | orchestrator | Monday 10 February 2025 09:31:54 +0000 (0:00:00.464) 0:00:22.153 ******* 2025-02-10 09:33:54.196867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f024456c--4135--5029--bf0e--13fb105dc5b7-osd--block--f024456c--4135--5029--bf0e--13fb105dc5b7', 'dm-uuid-LVM-h3ypNuwZWj2S4djDOMdryAWIRBQEd03bLxlUATcvF5FxMKzE3Dd5KLuNVfghAhL4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.196885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a3ebd317--95a0--5383--a134--14be01baa44d-osd--block--a3ebd317--95a0--5383--a134--14be01baa44d', 'dm-uuid-LVM-yrWaaOsW8g6wWkHwEDVP4bp11l3u7ccCF1PsELKWIgHYSrpkSyx99K1uIWL1F0Kl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.196899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.196915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.196932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.196947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8f95f397--c0f5--5bc9--9af0--9f577faebed9-osd--block--8f95f397--c0f5--5bc9--9af0--9f577faebed9', 'dm-uuid-LVM-uhI5MWJlMX7QVsgsSfRBdnnDS5EhplIv6LUEclEn4dSHXjMet8gvcOzpJUZXzPv7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197022 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}})  2025-02-10 09:33:54.197061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--204ceda1--8353--534a--a397--2ce8fe516c0b-osd--block--204ceda1--8353--534a--a397--2ce8fe516c0b', 'dm-uuid-LVM-DBV0ZXNf5Rux7ZKFvL0W1kv5R7eU8F8uRvmLcQfYR9yeelyfE3JT5St3LjN1vmn1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84', 'scsi-SQEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e6955b6-ceeb-4871-99fa-6f4d00721e84-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:54.197187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f024456c--4135--5029--bf0e--13fb105dc5b7-osd--block--f024456c--4135--5029--bf0e--13fb105dc5b7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-98caFU-c1oV-q0at-uThP-j5GP-8Amf-KtSTM5', 'scsi-0QEMU_QEMU_HARDDISK_2f4b37ab-ea48-4e89-a573-74f28832e598', 'scsi-SQEMU_QEMU_HARDDISK_2f4b37ab-ea48-4e89-a573-74f28832e598'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:54.197205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a3ebd317--95a0--5383--a134--14be01baa44d-osd--block--a3ebd317--95a0--5383--a134--14be01baa44d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ubigmV-AdE2-nFuE-5Jj2-kBId-NpGc-T88bcC', 'scsi-0QEMU_QEMU_HARDDISK_5f0d01b9-0e02-4dee-9565-cff6803c305a', 'scsi-SQEMU_QEMU_HARDDISK_5f0d01b9-0e02-4dee-9565-cff6803c305a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:54.197235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9150377-bf23-4053-9d8b-4b6b16705e51', 'scsi-SQEMU_QEMU_HARDDISK_b9150377-bf23-4053-9d8b-4b6b16705e51'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:54.197296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197360 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899', 'scsi-SQEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899-part1', 'scsi-SQEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899-part14', 'scsi-SQEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899-part15', 'scsi-SQEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899-part16', 'scsi-SQEMU_QEMU_HARDDISK_d47abd3b-400c-4af9-8fd5-b0027775d899-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:54.197384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:54.197400 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.197415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8f95f397--c0f5--5bc9--9af0--9f577faebed9-osd--block--8f95f397--c0f5--5bc9--9af0--9f577faebed9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bqvcn8-mVqm-BLr0-ANFq-gzac-dC5g-Mq8mV7', 'scsi-0QEMU_QEMU_HARDDISK_b66e53a8-0538-4d41-8a28-7ec132d4688f', 'scsi-SQEMU_QEMU_HARDDISK_b66e53a8-0538-4d41-8a28-7ec132d4688f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:54.197431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--204ceda1--8353--534a--a397--2ce8fe516c0b-osd--block--204ceda1--8353--534a--a397--2ce8fe516c0b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qC5aBM-meSg-TaTe-C4KK-rMdi-fEdd-SbdWJP', 'scsi-0QEMU_QEMU_HARDDISK_a5ae359e-12ae-4197-8eef-3ae34f8c1334', 'scsi-SQEMU_QEMU_HARDDISK_a5ae359e-12ae-4197-8eef-3ae34f8c1334'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:54.197447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c468f1bf--17d5--510b--8602--ed8efc51f14c-osd--block--c468f1bf--17d5--510b--8602--ed8efc51f14c', 'dm-uuid-LVM-a8C2gnTgwcOwFPJA2mm9UewWaXbvd0CLiixcWuVbeZpi0dDnE05g7vE0nBiDyAJ8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2438f8bd-e1da-4f87-b9a4-97b4ac996f9c', 'scsi-SQEMU_QEMU_HARDDISK_2438f8bd-e1da-4f87-b9a4-97b4ac996f9c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:54.197489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b75c92e--4993--5ff3--a16a--a182a58c3e6b-osd--block--9b75c92e--4993--5ff3--a16a--a182a58c3e6b', 'dm-uuid-LVM-RQc7qDSCkwgL9Ynbo467106NyuNKxjkVxZiXie2vTtw4eqcbamkRGKXeBnIB4fIN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:54.197544 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.197559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:33:54.197678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642', 'scsi-SQEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642-part1', 'scsi-SQEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642-part14', 'scsi-SQEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642-part15', 'scsi-SQEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642-part16', 'scsi-SQEMU_QEMU_HARDDISK_2afb4105-92b8-4f06-8361-8ae3b6c04642-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:54.197698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c468f1bf--17d5--510b--8602--ed8efc51f14c-osd--block--c468f1bf--17d5--510b--8602--ed8efc51f14c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Eevd5D-qOUd-EQFp-X6R4-ym4s-XMFN-shAQIW', 'scsi-0QEMU_QEMU_HARDDISK_f26c39ad-11ff-4bfe-ad92-01d3e6216f06', 'scsi-SQEMU_QEMU_HARDDISK_f26c39ad-11ff-4bfe-ad92-01d3e6216f06'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:54.197714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9b75c92e--4993--5ff3--a16a--a182a58c3e6b-osd--block--9b75c92e--4993--5ff3--a16a--a182a58c3e6b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yleaN1-8e0F-mdX7-rSzw-asqN-R9lE-Re8mng', 'scsi-0QEMU_QEMU_HARDDISK_8c6e9329-7e35-46a5-ba0c-0fddbc56ea2a', 'scsi-SQEMU_QEMU_HARDDISK_8c6e9329-7e35-46a5-ba0c-0fddbc56ea2a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:54.197744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_30eee918-495f-46ac-9f20-7bf018cd9f92', 'scsi-SQEMU_QEMU_HARDDISK_30eee918-495f-46ac-9f20-7bf018cd9f92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:54.197760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:33:54.197775 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.197789 | orchestrator | 2025-02-10 09:33:54.197804 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-02-10 09:33:54.197819 | orchestrator | Monday 10 February 2025 09:31:55 +0000 (0:00:00.787) 0:00:22.940 ******* 2025-02-10 09:33:54.197835 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-02-10 09:33:54.197851 | orchestrator | 2025-02-10 09:33:54.197866 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-02-10 09:33:54.197881 | orchestrator | Monday 10 February 2025 09:31:56 +0000 (0:00:01.558) 0:00:24.499 ******* 2025-02-10 09:33:54.197895 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:54.197910 | orchestrator | 2025-02-10 09:33:54.197925 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-02-10 09:33:54.197940 | orchestrator | Monday 10 February 2025 09:31:57 +0000 (0:00:00.188) 0:00:24.687 ******* 2025-02-10 09:33:54.197989 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:54.198008 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:54.198060 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:54.198075 | orchestrator | 2025-02-10 09:33:54.198090 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-02-10 09:33:54.198105 | orchestrator | Monday 10 February 2025 09:31:57 +0000 (0:00:00.515) 0:00:25.203 ******* 2025-02-10 09:33:54.198119 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:54.198133 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:54.198147 | orchestrator | ok: 
[testbed-node-5] 2025-02-10 09:33:54.198161 | orchestrator | 2025-02-10 09:33:54.198176 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-02-10 09:33:54.198190 | orchestrator | Monday 10 February 2025 09:31:58 +0000 (0:00:00.753) 0:00:25.957 ******* 2025-02-10 09:33:54.198205 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:54.198219 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:54.198233 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:54.198247 | orchestrator | 2025-02-10 09:33:54.198262 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-10 09:33:54.198277 | orchestrator | Monday 10 February 2025 09:31:59 +0000 (0:00:00.567) 0:00:26.524 ******* 2025-02-10 09:33:54.198291 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:54.198305 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:54.198320 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:54.198335 | orchestrator | 2025-02-10 09:33:54.198350 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-10 09:33:54.198365 | orchestrator | Monday 10 February 2025 09:31:59 +0000 (0:00:00.691) 0:00:27.215 ******* 2025-02-10 09:33:54.198380 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.198397 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.198422 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.198437 | orchestrator | 2025-02-10 09:33:54.198457 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-10 09:33:54.198473 | orchestrator | Monday 10 February 2025 09:32:00 +0000 (0:00:00.360) 0:00:27.576 ******* 2025-02-10 09:33:54.198488 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.198502 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.198516 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.198530 | orchestrator | 2025-02-10 09:33:54.198544 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-10 09:33:54.198558 | orchestrator | Monday 10 February 2025 09:32:00 +0000 (0:00:00.658) 0:00:28.234 ******* 2025-02-10 09:33:54.198572 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.198586 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.198601 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.198615 | orchestrator | 2025-02-10 09:33:54.198629 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-02-10 09:33:54.198642 | orchestrator | Monday 10 February 2025 09:32:01 +0000 (0:00:00.611) 0:00:28.845 ******* 2025-02-10 09:33:54.198657 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:33:54.198671 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:33:54.198685 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-10 09:33:54.198699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:33:54.198713 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.198727 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-10 09:33:54.198741 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-10 09:33:54.198755 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-10 09:33:54.198769 | orchestrator 
| skipping: [testbed-node-4] 2025-02-10 09:33:54.198783 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-10 09:33:54.198798 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-10 09:33:54.198813 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.198827 | orchestrator | 2025-02-10 09:33:54.198841 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-02-10 09:33:54.198863 | orchestrator | Monday 10 February 2025 09:32:02 +0000 (0:00:01.356) 0:00:30.201 ******* 2025-02-10 09:33:54.198878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:33:54.198892 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-10 09:33:54.198906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:33:54.198919 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-10 09:33:54.198934 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-10 09:33:54.198947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:33:54.199026 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.199045 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-10 09:33:54.199061 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.199075 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-10 09:33:54.199089 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-10 09:33:54.199103 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.199118 | orchestrator | 2025-02-10 09:33:54.199132 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-02-10 09:33:54.199147 | orchestrator | Monday 10 February 2025 09:32:03 +0000 (0:00:01.121) 0:00:31.323 ******* 2025-02-10 09:33:54.199161 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-02-10 09:33:54.199175 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-02-10 09:33:54.199189 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-02-10 09:33:54.199212 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-02-10 09:33:54.199226 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-02-10 09:33:54.199246 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-02-10 09:33:54.199260 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-02-10 09:33:54.199274 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-02-10 09:33:54.199288 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-02-10 09:33:54.199301 | orchestrator | 2025-02-10 09:33:54.199316 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-02-10 09:33:54.199330 | orchestrator | Monday 10 February 2025 09:32:07 +0000 (0:00:03.271) 0:00:34.595 ******* 2025-02-10 09:33:54.199344 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:33:54.199358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:33:54.199373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:33:54.199387 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-10 09:33:54.199401 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-10 
09:33:54.199415 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-10 09:33:54.199429 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.199443 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.199457 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-10 09:33:54.199472 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-10 09:33:54.199486 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-10 09:33:54.199500 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.199514 | orchestrator | 2025-02-10 09:33:54.199529 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-02-10 09:33:54.199543 | orchestrator | Monday 10 February 2025 09:32:07 +0000 (0:00:00.581) 0:00:35.176 ******* 2025-02-10 09:33:54.199556 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-10 09:33:54.199571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-10 09:33:54.199586 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-10 09:33:54.199609 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-10 09:33:54.199623 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-10 09:33:54.199637 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.199651 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-10 09:33:54.199665 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.199679 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-10 09:33:54.199693 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-10 09:33:54.199707 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-10 09:33:54.199721 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.199735 | orchestrator | 2025-02-10 09:33:54.199749 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-02-10 09:33:54.199763 | orchestrator | Monday 10 February 2025 09:32:08 +0000 (0:00:00.600) 0:00:35.777 ******* 2025-02-10 09:33:54.199778 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-10 09:33:54.199793 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-10 09:33:54.199808 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-10 09:33:54.199823 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.199838 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-10 09:33:54.199853 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-10 09:33:54.199868 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-10 09:33:54.199891 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.199906 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-10 09:33:54.199928 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-10 09:33:54.199943 | orchestrator | skipping: [testbed-node-5] => (item={'name': 
'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-10 09:33:54.199983 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.199999 | orchestrator | 2025-02-10 09:33:54.200013 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-02-10 09:33:54.200027 | orchestrator | Monday 10 February 2025 09:32:08 +0000 (0:00:00.429) 0:00:36.206 ******* 2025-02-10 09:33:54.200041 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:33:54.200055 | orchestrator | 2025-02-10 09:33:54.200069 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-10 09:33:54.200083 | orchestrator | Monday 10 February 2025 09:32:09 +0000 (0:00:00.908) 0:00:37.115 ******* 2025-02-10 09:33:54.200098 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.200112 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.200126 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.200140 | orchestrator | 2025-02-10 09:33:54.200154 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-10 09:33:54.200168 | orchestrator | Monday 10 February 2025 09:32:10 +0000 (0:00:00.401) 0:00:37.517 ******* 2025-02-10 09:33:54.200182 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.200196 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.200210 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.200224 | orchestrator | 2025-02-10 09:33:54.200238 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-10 09:33:54.200252 | orchestrator | Monday 10 February 2025 09:32:10 +0000 (0:00:00.427) 0:00:37.944 ******* 2025-02-10 09:33:54.200266 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.200280 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.200295 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.200309 | orchestrator | 2025-02-10 09:33:54.200324 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-10 09:33:54.200338 | orchestrator | Monday 10 February 2025 09:32:10 +0000 (0:00:00.470) 0:00:38.415 ******* 2025-02-10 09:33:54.200352 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:54.200367 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:54.200380 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:54.200395 | orchestrator | 2025-02-10 09:33:54.200409 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-10 09:33:54.200423 | orchestrator | Monday 10 February 2025 09:32:11 +0000 (0:00:00.484) 0:00:38.900 ******* 2025-02-10 09:33:54.200437 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:54.200451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:54.200465 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:54.200479 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.200494 | orchestrator | 2025-02-10 09:33:54.200508 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-10 09:33:54.200522 | orchestrator | Monday 10 February 2025 09:32:11 +0000 (0:00:00.411) 0:00:39.312 ******* 2025-02-10 
09:33:54.200541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:54.200556 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:54.200570 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:54.200584 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.200598 | orchestrator | 2025-02-10 09:33:54.200612 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-10 09:33:54.200633 | orchestrator | Monday 10 February 2025 09:32:12 +0000 (0:00:00.359) 0:00:39.672 ******* 2025-02-10 09:33:54.200648 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:54.200662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:54.200676 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:54.200691 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.200705 | orchestrator | 2025-02-10 09:33:54.200719 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:54.200740 | orchestrator | Monday 10 February 2025 09:32:12 +0000 (0:00:00.470) 0:00:40.142 ******* 2025-02-10 09:33:54.200754 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:33:54.200769 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:33:54.200783 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:33:54.200797 | orchestrator | 2025-02-10 09:33:54.200811 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-10 09:33:54.200826 | orchestrator | Monday 10 February 2025 09:32:12 +0000 (0:00:00.338) 0:00:40.480 ******* 2025-02-10 09:33:54.200840 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-02-10 09:33:54.200854 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-02-10 09:33:54.200868 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-02-10 09:33:54.200882 | orchestrator | 2025-02-10 09:33:54.200897 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-10 09:33:54.200912 | orchestrator | Monday 10 February 2025 09:32:13 +0000 (0:00:00.962) 0:00:41.443 ******* 2025-02-10 09:33:54.200926 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.200940 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.200981 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.200997 | orchestrator | 2025-02-10 09:33:54.201011 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-10 09:33:54.201025 | orchestrator | Monday 10 February 2025 09:32:14 +0000 (0:00:00.372) 0:00:41.816 ******* 2025-02-10 09:33:54.201039 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.201053 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.201067 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.201088 | orchestrator | 2025-02-10 09:33:54.201103 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-10 09:33:54.201125 | orchestrator | Monday 10 February 2025 09:32:14 +0000 (0:00:00.389) 0:00:42.205 ******* 2025-02-10 09:33:54.201141 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-10 09:33:54.201155 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.201169 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-10 09:33:54.201184 | 
orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.201197 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-10 09:33:54.201212 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.201225 | orchestrator | 2025-02-10 09:33:54.201239 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-10 09:33:54.201253 | orchestrator | Monday 10 February 2025 09:32:15 +0000 (0:00:00.520) 0:00:42.726 ******* 2025-02-10 09:33:54.201267 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:54.201281 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.201296 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:54.201310 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.201324 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-10 09:33:54.201339 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.201353 | orchestrator | 2025-02-10 09:33:54.201367 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-10 09:33:54.201388 | orchestrator | Monday 10 February 2025 09:32:15 +0000 (0:00:00.648) 0:00:43.374 ******* 2025-02-10 09:33:54.201403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-10 09:33:54.201417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-10 09:33:54.201432 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-10 09:33:54.201446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-10 09:33:54.201460 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.201475 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-10 09:33:54.201489 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-10 09:33:54.201502 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-10 09:33:54.201516 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-10 09:33:54.201530 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.201544 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-10 09:33:54.201558 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.201572 | orchestrator | 2025-02-10 09:33:54.201587 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-02-10 09:33:54.201601 | orchestrator | Monday 10 February 2025 09:32:16 +0000 (0:00:00.872) 0:00:44.246 ******* 2025-02-10 09:33:54.201615 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.201629 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.201643 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:33:54.201657 | orchestrator | 2025-02-10 09:33:54.201671 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-02-10 09:33:54.201686 | orchestrator | Monday 10 February 2025 09:32:17 +0000 (0:00:00.363) 0:00:44.610 ******* 2025-02-10 09:33:54.201700 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-10 09:33:54.201714 | orchestrator | 
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:33:54.201728 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:33:54.201742 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-02-10 09:33:54.201756 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-10 09:33:54.201770 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-10 09:33:54.201784 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-10 09:33:54.201799 | orchestrator | 2025-02-10 09:33:54.201813 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-02-10 09:33:54.201827 | orchestrator | Monday 10 February 2025 09:32:18 +0000 (0:00:01.222) 0:00:45.832 ******* 2025-02-10 09:33:54.201842 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-10 09:33:54.201856 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:33:54.201870 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:33:54.201884 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-02-10 09:33:54.201898 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-10 09:33:54.201912 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-10 09:33:54.201926 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-10 09:33:54.201941 | orchestrator | 2025-02-10 09:33:54.201985 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-02-10 09:33:54.202000 | orchestrator | Monday 10 February 2025 09:32:20 +0000 (0:00:02.433) 0:00:48.266 ******* 2025-02-10 09:33:54.202066 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:33:54.202086 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:33:54.202108 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-02-10 09:33:54.202123 | orchestrator | 2025-02-10 09:33:54.202137 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-02-10 09:33:54.202159 | orchestrator | Monday 10 February 2025 09:32:21 +0000 (0:00:00.655) 0:00:48.921 ******* 2025-02-10 09:33:54.202175 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-10 09:33:54.202192 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-10 09:33:54.202207 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 
'replicated_rule', 'size': 3, 'type': 1}) 2025-02-10 09:33:54.202222 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-10 09:33:54.202236 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-10 09:33:54.202251 | orchestrator | 2025-02-10 09:33:54.202265 | orchestrator | TASK [generate keys] *********************************************************** 2025-02-10 09:33:54.202279 | orchestrator | Monday 10 February 2025 09:33:00 +0000 (0:00:39.248) 0:01:28.170 ******* 2025-02-10 09:33:54.202293 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:33:54.202307 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:33:54.202321 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:33:54.202335 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:33:54.202349 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:33:54.202364 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:33:54.202377 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-02-10 09:33:54.202391 | orchestrator | 2025-02-10 09:33:54.202405 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-02-10 09:33:54.202419 | orchestrator | Monday 10 February 2025 09:33:20 +0000 (0:00:20.106) 0:01:48.276 ******* 2025-02-10 09:33:54.202433 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:33:54.202447 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:33:54.202461 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:33:54.202475 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:33:54.202488 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:33:54.202502 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:33:54.202516 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-02-10 09:33:54.202530 | orchestrator | 2025-02-10 09:33:54.202551 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-02-10 09:33:54.202565 | orchestrator | Monday 10 February 2025 09:33:31 +0000 (0:00:10.470) 0:01:58.747 ******* 2025-02-10 09:33:54.202579 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-10 09:33:54.202593 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-10 09:33:54.202606 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-10 09:33:54.202620 | orchestrator | 
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-02-10 09:33:54.202634 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-02-10 09:33:54.202648 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-02-10 09:33:54.202662 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-02-10 09:33:54.202676 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-02-10 09:33:54.202690 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-02-10 09:33:54.202704 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-02-10 09:33:54.202723 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-02-10 09:33:54.202743 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-02-10 09:33:57.236767 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-02-10 09:33:57.236889 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-02-10 09:33:57.236901 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-02-10 09:33:57.236910 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-02-10 09:33:57.236918 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-02-10 09:33:57.236927 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-02-10 09:33:57.236936 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-02-10 09:33:57.236944 | orchestrator |
2025-02-10 09:33:57.236986 | orchestrator | PLAY RECAP *********************************************************************
2025-02-10 09:33:57.236999 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-02-10 09:33:57.237009 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0
2025-02-10 09:33:57.237019 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0
2025-02-10 09:33:57.237032 | orchestrator |
2025-02-10 09:33:57.237045 | orchestrator |
2025-02-10 09:33:57.237057 | orchestrator |
2025-02-10 09:33:57.237070 | orchestrator | TASKS RECAP ********************************************************************
2025-02-10 09:33:57.237083 | orchestrator | Monday 10 February 2025 09:33:50 +0000 (0:00:19.611) 0:02:18.358 *******
2025-02-10 09:33:57.237095 | orchestrator | ===============================================================================
2025-02-10 09:33:57.237109 | orchestrator | create openstack pool(s) ----------------------------------------------- 39.25s
2025-02-10 09:33:57.237122 | orchestrator | generate keys ---------------------------------------------------------- 20.11s
2025-02-10 09:33:57.237134 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 19.61s
2025-02-10 09:33:57.237142 | orchestrator | get keys from monitors ------------------------------------------------- 10.47s
2025-02-10 09:33:57.237150 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 3.27s
2025-02-10 09:33:57.237158 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 2.43s
2025-02-10 09:33:57.237195 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.39s
2025-02-10 09:33:57.237204 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.59s
2025-02-10 09:33:57.237212 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.56s
2025-02-10 09:33:57.237220 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.36s
2025-02-10 09:33:57.237228 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.22s
2025-02-10 09:33:57.237236 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 1.13s
2025-02-10 09:33:57.237244 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 1.12s
2025-02-10 09:33:57.237252 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.99s
2025-02-10 09:33:57.237260 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.99s
2025-02-10 09:33:57.237268 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 0.96s
2025-02-10 09:33:57.237276 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.91s
2025-02-10 09:33:57.237284 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 0.87s
2025-02-10 09:33:57.237292 | orchestrator | ceph-facts : set_fact container_exec_cmd -------------------------------- 0.85s
2025-02-10 09:33:57.237301 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.79s
2025-02-10 09:33:57.237311 | orchestrator | 2025-02-10 09:33:54 | INFO  | Task 7740f2cf-6af3-4ce8-a66b-95457744d624 is in state STARTED
2025-02-10 09:33:57.237320 | orchestrator | 2025-02-10 09:33:54 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED
2025-02-10 09:33:57.237329 | orchestrator | 2025-02-10 09:33:54 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:33:57.237370 | orchestrator | 2025-02-10 09:33:57 | INFO  | Task 7740f2cf-6af3-4ce8-a66b-95457744d624 is in state STARTED
2025-02-10 09:33:57.238570 | orchestrator | 2025-02-10 09:33:57 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED
2025-02-10 09:34:00.284351 | orchestrator | 2025-02-10 09:33:57 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:34:00.284509 | orchestrator | 2025-02-10 09:34:00 | INFO  | Task 7740f2cf-6af3-4ce8-a66b-95457744d624 is in state STARTED
2025-02-10 09:34:00.284718 | orchestrator | 2025-02-10 09:34:00 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED
2025-02-10 09:34:00.284928 | orchestrator | 2025-02-10 09:34:00 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:34:03.331853 | orchestrator | 2025-02-10 09:34:03 | INFO  | Task a6b2b51c-7e95-4a24-81b0-b19d4dd17ddb is in state STARTED
2025-02-10 09:34:03.333520 | orchestrator | 2025-02-10 09:34:03 | INFO  | Task 7740f2cf-6af3-4ce8-a66b-95457744d624 is in state STARTED
2025-02-10 09:34:03.335616 | orchestrator | 2025-02-10 09:34:03 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED
2025-02-10 09:34:03.335893 | orchestrator | 2025-02-10 09:34:03 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:34:06.386628 | orchestrator | 2025-02-10 09:34:06 | INFO  |
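The interleaved `Task <uuid> is in state STARTED` / `Wait 1 second(s) until the next check` lines above are the deploy wrapper polling the OSISM manager until each queued task leaves the STARTED state. A minimal sketch of that kind of poll loop, assuming a hypothetical `get_task_state()` lookup (the toy stand-in below flips to SUCCESS after a few polls; the real job queries the manager's task backend instead):

```python
import time

# Toy stand-in for the real state lookup (assumption, not the osism client API):
# each task reports STARTED a few times before flipping to SUCCESS.
_polls = {}

def get_task_state(task_id: str) -> str:
    n = _polls[task_id] = _polls.get(task_id, 0) + 1
    return "SUCCESS" if n > 3 else "STARTED"

def wait_for_tasks(task_ids, interval=1):
    """Poll every task until it leaves PENDING/STARTED, printing log-style lines."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks(["7740f2cf-6af3-4ce8-a66b-95457744d624",
                "441b9e90-0d2d-4a5d-83f2-7a2213dad0e8"])
```

This reproduces the cadence seen in the log: all still-running task IDs are reported each cycle, then the loop sleeps one second before the next check.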
Task a6b2b51c-7e95-4a24-81b0-b19d4dd17ddb is in state STARTED 2025-02-10 09:34:06.386868 | orchestrator | 2025-02-10 09:34:06 | INFO  | Task 7740f2cf-6af3-4ce8-a66b-95457744d624 is in state STARTED 2025-02-10 09:34:06.389251 | orchestrator | 2025-02-10 09:34:06 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:34:09.439509 | orchestrator | 2025-02-10 09:34:06 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:09.439683 | orchestrator | 2025-02-10 09:34:09 | INFO  | Task a6b2b51c-7e95-4a24-81b0-b19d4dd17ddb is in state STARTED 2025-02-10 09:34:09.440216 | orchestrator | 2025-02-10 09:34:09 | INFO  | Task 7740f2cf-6af3-4ce8-a66b-95457744d624 is in state STARTED 2025-02-10 09:34:09.440264 | orchestrator | 2025-02-10 09:34:09 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:34:12.485699 | orchestrator | 2025-02-10 09:34:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:12.485955 | orchestrator | 2025-02-10 09:34:12 | INFO  | Task a6b2b51c-7e95-4a24-81b0-b19d4dd17ddb is in state STARTED 2025-02-10 09:34:12.487398 | orchestrator | 2025-02-10 09:34:12 | INFO  | Task 7740f2cf-6af3-4ce8-a66b-95457744d624 is in state STARTED 2025-02-10 09:34:12.487532 | orchestrator | 2025-02-10 09:34:12 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state STARTED 2025-02-10 09:34:15.558670 | orchestrator | 2025-02-10 09:34:12 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:15.558837 | orchestrator | 2025-02-10 09:34:15 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:34:15.559068 | orchestrator | 2025-02-10 09:34:15 | INFO  | Task a6b2b51c-7e95-4a24-81b0-b19d4dd17ddb is in state STARTED 2025-02-10 09:34:15.559102 | orchestrator | 2025-02-10 09:34:15 | INFO  | Task 7740f2cf-6af3-4ce8-a66b-95457744d624 is in state STARTED 2025-02-10 09:34:15.559828 | orchestrator | 2025-02-10 09:34:15 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:34:15.560465 | orchestrator | 2025-02-10 09:34:15 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:34:15.562123 | orchestrator | 2025-02-10 09:34:15 | INFO  | Task 441b9e90-0d2d-4a5d-83f2-7a2213dad0e8 is in state SUCCESS 2025-02-10 09:34:15.563940 | orchestrator | 2025-02-10 09:34:15.564019 | orchestrator | 2025-02-10 09:34:15.564036 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:34:15.564051 | orchestrator | 2025-02-10 09:34:15.564400 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:34:15.564426 | orchestrator | Monday 10 February 2025 09:31:25 +0000 (0:00:00.460) 0:00:00.460 ******* 2025-02-10 09:34:15.564441 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:15.564456 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:34:15.564471 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:34:15.564485 | orchestrator | 2025-02-10 09:34:15.564499 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:34:15.564513 | orchestrator | Monday 10 February 2025 09:31:26 +0000 (0:00:00.701) 0:00:01.162 ******* 2025-02-10 09:34:15.564527 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-02-10 09:34:15.564541 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-02-10 09:34:15.564555 | orchestrator | ok: [testbed-node-2] => 
(item=enable_keystone_True) 2025-02-10 09:34:15.564569 | orchestrator | 2025-02-10 09:34:15.564583 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-02-10 09:34:15.564597 | orchestrator | 2025-02-10 09:34:15.564611 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-10 09:34:15.564625 | orchestrator | Monday 10 February 2025 09:31:27 +0000 (0:00:00.685) 0:00:01.847 ******* 2025-02-10 09:34:15.564640 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:34:15.564656 | orchestrator | 2025-02-10 09:34:15.564670 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-02-10 09:34:15.564684 | orchestrator | Monday 10 February 2025 09:31:28 +0000 (0:00:00.926) 0:00:02.773 ******* 2025-02-10 09:34:15.564702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:34:15.564767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:34:15.564847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:34:15.564868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:34:15.564884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:34:15.564909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:34:15.564932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:34:15.564948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:34:15.564993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:34:15.565010 | orchestrator | 2025-02-10 09:34:15.565026 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-02-10 09:34:15.565043 | orchestrator | Monday 10 February 2025 09:31:30 +0000 (0:00:02.235) 0:00:05.008 ******* 2025-02-10 09:34:15.565065 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-02-10 09:34:15.565081 | orchestrator | 2025-02-10 09:34:15.565101 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-02-10 09:34:15.565117 | orchestrator | Monday 10 February 2025 09:31:31 +0000 (0:00:00.645) 0:00:05.654 ******* 2025-02-10 09:34:15.565132 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:15.565148 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:34:15.565164 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:34:15.565179 | orchestrator | 2025-02-10 09:34:15.565195 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-02-10 09:34:15.565210 | orchestrator | Monday 10 February 2025 09:31:31 +0000 (0:00:00.473) 0:00:06.127 ******* 2025-02-10 09:34:15.565226 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:34:15.565243 | orchestrator | 2025-02-10 09:34:15.565259 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-10 09:34:15.565274 | orchestrator | Monday 10 February 2025 09:31:31 +0000 (0:00:00.451) 0:00:06.579 ******* 2025-02-10 09:34:15.565304 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:34:15.565319 | orchestrator | 2025-02-10 09:34:15.565335 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-02-10 09:34:15.565351 | orchestrator | Monday 10 February 2025 09:31:32 +0000 (0:00:00.856) 0:00:07.435 ******* 2025-02-10 09:34:15.565367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': 
'30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:34:15.565384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:34:15.565406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:34:15.565431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:34:15.565455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:34:15.565469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:34:15.565484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:34:15.565504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:34:15.565518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:34:15.565533 | orchestrator | 2025-02-10 09:34:15.565547 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-02-10 09:34:15.565561 | orchestrator | Monday 10 February 2025 09:31:36 +0000 (0:00:03.358) 0:00:10.794 ******* 2025-02-10 09:34:15.565585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-10 09:34:15.565607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:34:15.565622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:34:15.565636 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:15.565656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-10 09:34:15.565671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:34:15.565698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:34:15.565720 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:34:15.565735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-10 09:34:15.565750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:34:15.565765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:34:15.565779 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:34:15.565794 | orchestrator | 2025-02-10 09:34:15.565808 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend 
internal TLS key] **** 2025-02-10 09:34:15.565822 | orchestrator | Monday 10 February 2025 09:31:37 +0000 (0:00:00.861) 0:00:11.655 ******* 2025-02-10 09:34:15.565836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-10 09:34:15.565859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:34:15.565888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:34:15.565904 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:15.565918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-10 09:34:15.565996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:34:15.566012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:34:15.566105 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:34:15.566131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-10 09:34:15.566155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:34:15.566171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-10 09:34:15.566185 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:34:15.566200 | orchestrator | 2025-02-10 09:34:15.566214 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-02-10 09:34:15.566228 | orchestrator | Monday 10 February 2025 09:31:38 +0000 (0:00:01.211) 0:00:12.867 ******* 2025-02-10 09:34:15.566270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:34:15.566287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:34:15.566318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:34:15.566334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:34:15.566349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:34:15.566375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:34:15.566391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:34:15.566406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:34:15.566433 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:34:15.566448 | orchestrator | 2025-02-10 09:34:15.566463 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-02-10 09:34:15.566478 | orchestrator | Monday 10 February 2025 09:31:42 +0000 (0:00:03.819) 0:00:16.686 ******* 2025-02-10 09:34:15.566493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:34:15.566519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:34:15.566535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:34:15.566550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:34:15.566581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:34:15.566597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:34:15.566622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:34:15.566638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:34:15.566653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:34:15.566675 | orchestrator | 2025-02-10 09:34:15.566690 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-02-10 09:34:15.566704 | orchestrator | Monday 10 February 2025 09:31:49 +0000 (0:00:07.638) 0:00:24.327 ******* 2025-02-10 09:34:15.566719 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:34:15.566733 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:34:15.566747 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:34:15.566761 | orchestrator | 2025-02-10 09:34:15.566775 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-02-10 09:34:15.566789 | orchestrator | Monday 10 February 2025 09:31:53 +0000 (0:00:03.531) 0:00:27.859 ******* 2025-02-10 09:34:15.566803 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:15.566818 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:34:15.566832 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:34:15.566846 | orchestrator | 2025-02-10 09:34:15.566860 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-02-10 09:34:15.566875 | orchestrator | Monday 10 February 2025 09:31:55 +0000 (0:00:01.860) 0:00:29.720 ******* 2025-02-10 09:34:15.566889 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:15.566902 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:34:15.566917 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:34:15.566931 | orchestrator | 2025-02-10 09:34:15.566945 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-02-10 09:34:15.567014 | orchestrator | Monday 10 February 2025 09:31:55 +0000 (0:00:00.538) 0:00:30.259 ******* 2025-02-10 09:34:15.567032 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:15.567047 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:34:15.567068 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:34:15.567082 | orchestrator | 2025-02-10 09:34:15.567096 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-02-10 09:34:15.567110 | orchestrator | Monday 10 February 2025 09:31:56 +0000 (0:00:00.573) 0:00:30.832 ******* 2025-02-10 09:34:15.567125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:34:15.567140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:34:15.567168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:34:15.567193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:34:15.567216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:34:15.567231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-10 09:34:15.567246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:34:15.567260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:34:15.567295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:34:15.567310 | orchestrator | 2025-02-10 09:34:15.567325 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-10 09:34:15.567339 | orchestrator | Monday 10 February 2025 09:31:59 +0000 (0:00:03.599) 0:00:34.432 ******* 2025-02-10 09:34:15.567353 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:15.567367 | orchestrator | skipping: 
[testbed-node-1] 2025-02-10 09:34:15.567381 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:34:15.567395 | orchestrator | 2025-02-10 09:34:15.567409 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-02-10 09:34:15.567423 | orchestrator | Monday 10 February 2025 09:32:00 +0000 (0:00:00.318) 0:00:34.750 ******* 2025-02-10 09:34:15.567437 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-02-10 09:34:15.567452 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-02-10 09:34:15.567466 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-02-10 09:34:15.567480 | orchestrator | 2025-02-10 09:34:15.567494 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-02-10 09:34:15.567508 | orchestrator | Monday 10 February 2025 09:32:04 +0000 (0:00:03.873) 0:00:38.624 ******* 2025-02-10 09:34:15.567522 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:34:15.567537 | orchestrator | 2025-02-10 09:34:15.567549 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-02-10 09:34:15.567562 | orchestrator | Monday 10 February 2025 09:32:05 +0000 (0:00:01.756) 0:00:40.381 ******* 2025-02-10 09:34:15.567574 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:15.567587 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:34:15.567599 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:34:15.567611 | orchestrator | 2025-02-10 09:34:15.567629 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-02-10 09:34:15.567642 | orchestrator | Monday 10 February 2025 09:32:07 +0000 (0:00:01.957) 0:00:42.338 ******* 2025-02-10 09:34:15.567655 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:34:15.567667 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-02-10 09:34:15.567680 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-02-10 09:34:15.567692 | orchestrator | 2025-02-10 09:34:15.567705 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-02-10 09:34:15.567717 | orchestrator | Monday 10 February 2025 09:32:09 +0000 (0:00:01.494) 0:00:43.833 ******* 2025-02-10 09:34:15.567729 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:15.567742 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:34:15.567764 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:34:15.567777 | orchestrator | 2025-02-10 09:34:15.567789 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-02-10 09:34:15.567802 | orchestrator | Monday 10 February 2025 09:32:09 +0000 (0:00:00.719) 0:00:44.552 ******* 2025-02-10 09:34:15.567814 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-02-10 09:34:15.567833 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-02-10 09:34:15.567845 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-02-10 09:34:15.567858 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-02-10 09:34:15.567875 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 
'dest': 'fernet-rotate.sh'}) 2025-02-10 09:34:15.567888 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-02-10 09:34:15.567901 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-02-10 09:34:15.567913 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-02-10 09:34:15.567926 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-02-10 09:34:15.567938 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-02-10 09:34:15.567950 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-02-10 09:34:15.567977 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-02-10 09:34:15.567991 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-02-10 09:34:15.568003 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-02-10 09:34:15.568016 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-02-10 09:34:15.568028 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-02-10 09:34:15.568040 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-02-10 09:34:15.568053 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-02-10 09:34:15.568065 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-02-10 09:34:15.568078 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-02-10 09:34:15.568091 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-02-10 09:34:15.568103 | orchestrator | 2025-02-10 09:34:15.568115 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-02-10 09:34:15.568128 | orchestrator | Monday 10 February 2025 09:32:23 +0000 (0:00:13.239) 0:00:57.791 ******* 2025-02-10 09:34:15.568140 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-02-10 09:34:15.568152 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-02-10 09:34:15.568165 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-02-10 09:34:15.568183 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-02-10 09:34:15.568195 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-02-10 09:34:15.568208 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-02-10 09:34:15.568220 | orchestrator | 2025-02-10 09:34:15.568233 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-02-10 09:34:15.568246 | orchestrator | Monday 10 February 2025 09:32:26 +0000 (0:00:03.626) 0:01:01.418 ******* 2025-02-10 09:34:15.568266 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:34:15.568287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:34:15.568300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-10 09:34:15.568325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:34:15.568340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:34:15.568365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-10 09:34:15.568379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:34:15.568392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:34:15.568405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-10 09:34:15.568417 | orchestrator | 2025-02-10 
09:34:15.568431 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-10 09:34:15.568443 | orchestrator | Monday 10 February 2025 09:32:29 +0000 (0:00:02.775) 0:01:04.193 ******* 2025-02-10 09:34:15.568456 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:15.568468 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:34:15.568481 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:34:15.568493 | orchestrator | 2025-02-10 09:34:15.568506 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-02-10 09:34:15.568518 | orchestrator | Monday 10 February 2025 09:32:30 +0000 (0:00:00.490) 0:01:04.684 ******* 2025-02-10 09:34:15.568531 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:34:15.568543 | orchestrator | 2025-02-10 09:34:15.568555 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-02-10 09:34:15.568568 | orchestrator | Monday 10 February 2025 09:32:32 +0000 (0:00:02.507) 0:01:07.191 ******* 2025-02-10 09:34:15.568580 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:34:15.568592 | orchestrator | 2025-02-10 09:34:15.568605 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-02-10 09:34:15.568617 | orchestrator | Monday 10 February 2025 09:32:35 +0000 (0:00:02.447) 0:01:09.639 ******* 2025-02-10 09:34:15.568630 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:15.568648 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:34:15.568661 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:34:15.568673 | orchestrator | 2025-02-10 09:34:15.568685 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-02-10 09:34:15.568698 | orchestrator | Monday 10 February 2025 09:32:37 +0000 (0:00:02.483) 0:01:12.123 ******* 2025-02-10 09:34:15.568710 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:15.568722 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:34:15.568735 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:34:15.568747 | orchestrator | 2025-02-10 09:34:15.568760 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-02-10 09:34:15.568773 | orchestrator | Monday 10 February 2025 09:32:38 +0000 (0:00:00.750) 0:01:12.874 ******* 2025-02-10 09:34:15.568785 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:15.568798 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:34:15.568810 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:34:15.568823 | orchestrator | 2025-02-10 09:34:15.568835 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-02-10 09:34:15.568847 | orchestrator | Monday 10 February 2025 09:32:39 +0000 (0:00:00.899) 0:01:13.774 ******* 2025-02-10 09:34:15.568860 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:34:15.568872 | orchestrator | 2025-02-10 09:34:15.568885 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-02-10 09:34:15.568902 | orchestrator | Monday 10 February 2025 09:32:52 +0000 (0:00:13.147) 0:01:26.922 ******* 2025-02-10 09:34:15.568915 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:34:15.568927 | orchestrator | 2025-02-10 09:34:15.568940 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-02-10 09:34:15.568953 | orchestrator | Monday 
10 February 2025 09:33:00 +0000 (0:00:08.596) 0:01:35.518 ******* 2025-02-10 09:34:15.569008 | orchestrator | 2025-02-10 09:34:15.569022 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-02-10 09:34:15.569035 | orchestrator | Monday 10 February 2025 09:33:00 +0000 (0:00:00.067) 0:01:35.586 ******* 2025-02-10 09:34:15.569047 | orchestrator | 2025-02-10 09:34:15.569060 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-02-10 09:34:15.569072 | orchestrator | Monday 10 February 2025 09:33:01 +0000 (0:00:00.061) 0:01:35.648 ******* 2025-02-10 09:34:15.569085 | orchestrator | 2025-02-10 09:34:15.569095 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-02-10 09:34:15.569105 | orchestrator | Monday 10 February 2025 09:33:01 +0000 (0:00:00.064) 0:01:35.713 ******* 2025-02-10 09:34:15.569115 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:34:15.569126 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:34:15.569136 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:34:15.569146 | orchestrator | 2025-02-10 09:34:15.569156 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-02-10 09:34:15.569167 | orchestrator | Monday 10 February 2025 09:33:10 +0000 (0:00:09.575) 0:01:45.288 ******* 2025-02-10 09:34:15.569177 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:34:15.569187 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:34:15.569197 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:34:15.569207 | orchestrator | 2025-02-10 09:34:15.569217 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-02-10 09:34:15.569231 | orchestrator | Monday 10 February 2025 09:33:20 +0000 (0:00:09.410) 0:01:54.698 ******* 2025-02-10 09:34:15.569242 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:34:15.569252 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:34:15.569262 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:34:15.569272 | orchestrator | 2025-02-10 09:34:15.569283 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-10 09:34:15.569293 | orchestrator | Monday 10 February 2025 09:33:25 +0000 (0:00:05.539) 0:02:00.237 ******* 2025-02-10 09:34:15.569303 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:34:15.569319 | orchestrator | 2025-02-10 09:34:15.569329 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-02-10 09:34:15.569339 | orchestrator | Monday 10 February 2025 09:33:26 +0000 (0:00:00.730) 0:02:00.967 ******* 2025-02-10 09:34:15.569350 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:15.569360 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:34:15.569370 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:34:15.569380 | orchestrator | 2025-02-10 09:34:15.569390 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-02-10 09:34:15.569400 | orchestrator | Monday 10 February 2025 09:33:27 +0000 (0:00:00.989) 0:02:01.957 ******* 2025-02-10 09:34:15.569411 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:34:15.569421 | orchestrator | 2025-02-10 09:34:15.569431 | orchestrator | TASK [keystone : Creating admin project, user, role, 
service, and endpoint] **** 2025-02-10 09:34:15.569441 | orchestrator | Monday 10 February 2025 09:33:28 +0000 (0:00:01.584) 0:02:03.542 ******* 2025-02-10 09:34:15.569451 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-02-10 09:34:15.569461 | orchestrator | 2025-02-10 09:34:15.569471 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-02-10 09:34:15.569482 | orchestrator | Monday 10 February 2025 09:33:37 +0000 (0:00:08.909) 0:02:12.451 ******* 2025-02-10 09:34:15.569492 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-02-10 09:34:15.569502 | orchestrator | 2025-02-10 09:34:15.569512 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-02-10 09:34:15.569522 | orchestrator | Monday 10 February 2025 09:33:59 +0000 (0:00:22.160) 0:02:34.611 ******* 2025-02-10 09:34:15.569532 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-02-10 09:34:15.569543 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-02-10 09:34:15.569553 | orchestrator | 2025-02-10 09:34:15.569563 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-02-10 09:34:15.569573 | orchestrator | Monday 10 February 2025 09:34:07 +0000 (0:00:07.753) 0:02:42.365 ******* 2025-02-10 09:34:15.569583 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:15.569593 | orchestrator | 2025-02-10 09:34:15.569603 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-02-10 09:34:15.569613 | orchestrator | Monday 10 February 2025 09:34:07 +0000 (0:00:00.146) 0:02:42.512 ******* 2025-02-10 09:34:15.569623 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:15.569633 | orchestrator | 2025-02-10 09:34:15.569644 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-02-10 09:34:15.569654 | orchestrator | Monday 10 February 2025 09:34:08 +0000 (0:00:00.150) 0:02:42.663 ******* 2025-02-10 09:34:15.569664 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:15.569674 | orchestrator | 2025-02-10 09:34:15.569684 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-02-10 09:34:15.569694 | orchestrator | Monday 10 February 2025 09:34:08 +0000 (0:00:00.279) 0:02:42.942 ******* 2025-02-10 09:34:15.569704 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:15.569714 | orchestrator | 2025-02-10 09:34:15.569724 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-02-10 09:34:15.569734 | orchestrator | Monday 10 February 2025 09:34:08 +0000 (0:00:00.514) 0:02:43.457 ******* 2025-02-10 09:34:15.569744 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:15.569754 | orchestrator | 2025-02-10 09:34:15.569764 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-10 09:34:15.569774 | orchestrator | Monday 10 February 2025 09:34:12 +0000 (0:00:03.863) 0:02:47.321 ******* 2025-02-10 09:34:15.569789 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:18.610667 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:34:18.610810 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:34:18.610829 | orchestrator | 2025-02-10 09:34:18.610881 | orchestrator | PLAY RECAP 
********************************************************************* 2025-02-10 09:34:18.610899 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-02-10 09:34:18.610916 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-02-10 09:34:18.610930 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-02-10 09:34:18.610945 | orchestrator | 2025-02-10 09:34:18.610959 | orchestrator | 2025-02-10 09:34:18.611064 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:34:18.611079 | orchestrator | Monday 10 February 2025 09:34:13 +0000 (0:00:00.598) 0:02:47.919 ******* 2025-02-10 09:34:18.611093 | orchestrator | =============================================================================== 2025-02-10 09:34:18.611107 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.16s 2025-02-10 09:34:18.611121 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 13.24s 2025-02-10 09:34:18.611135 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.15s 2025-02-10 09:34:18.611166 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 9.58s 2025-02-10 09:34:18.611181 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.41s 2025-02-10 09:34:18.611195 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 8.91s 2025-02-10 09:34:18.611212 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 8.60s 2025-02-10 09:34:18.611227 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.75s 2025-02-10 09:34:18.611243 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 7.64s 2025-02-10 09:34:18.611259 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.54s 2025-02-10 09:34:18.611275 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 3.87s 2025-02-10 09:34:18.611290 | orchestrator | keystone : Creating default user role ----------------------------------- 3.86s 2025-02-10 09:34:18.611305 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.82s 2025-02-10 09:34:18.611321 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.63s 2025-02-10 09:34:18.611337 | orchestrator | keystone : Copying over existing policy file ---------------------------- 3.60s 2025-02-10 09:34:18.611352 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 3.53s 2025-02-10 09:34:18.611367 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.36s 2025-02-10 09:34:18.611383 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.78s 2025-02-10 09:34:18.611398 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.51s 2025-02-10 09:34:18.611414 | orchestrator | keystone : Checking for any running keystone_fernet containers ---------- 2.48s 2025-02-10 09:34:18.611430 | orchestrator | 2025-02-10 09:34:15 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 
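Editor's note: the repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" entries above and below come from the deployment wrapper polling its asynchronous tasks until each reaches a terminal state. The following is a minimal, illustrative Python sketch of such a polling loop; the names get_task_state and TERMINAL_STATES are assumptions for illustration only, not the actual OSISM API.

import time

# Assumed terminal states for this sketch; the real task backend may use others.
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll each task id until every one has reached a terminal state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # e.g. a lookup against the task queue backend
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

Called with the task ids seen in the log and a state-lookup callable, this reproduces the cadence of the log output: one status line per task per pass, then a fixed sleep between passes.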
2025-02-10 09:34:18.611447 | orchestrator | 2025-02-10 09:34:15 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:18.611482 | orchestrator | 2025-02-10 09:34:18 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:34:18.612665 | orchestrator | 2025-02-10 09:34:18 | INFO  | Task a6b2b51c-7e95-4a24-81b0-b19d4dd17ddb is in state STARTED 2025-02-10 09:34:18.612695 | orchestrator | 2025-02-10 09:34:18 | INFO  | Task 7740f2cf-6af3-4ce8-a66b-95457744d624 is in state STARTED 2025-02-10 09:34:18.612712 | orchestrator | 2025-02-10 09:34:18 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:34:18.612742 | orchestrator | 2025-02-10 09:34:18 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:34:18.612765 | orchestrator | 2025-02-10 09:34:18 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:34:21.659569 | orchestrator | 2025-02-10 09:34:18 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:21.659751 | orchestrator | 2025-02-10 09:34:21 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:34:21.661659 | orchestrator | 2025-02-10 09:34:21 | INFO  | Task a6b2b51c-7e95-4a24-81b0-b19d4dd17ddb is in state STARTED 2025-02-10 09:34:21.672015 | orchestrator | 2025-02-10 09:34:21 | INFO  | Task 7740f2cf-6af3-4ce8-a66b-95457744d624 is in state STARTED 2025-02-10 09:34:21.677134 | orchestrator | 2025-02-10 09:34:21 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:34:21.677210 | orchestrator | 2025-02-10 09:34:21 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:34:21.677250 | orchestrator | 2025-02-10 09:34:21 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:34:24.721263 | orchestrator | 2025-02-10 09:34:21 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:24.721428 | orchestrator | 2025-02-10 09:34:24 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:34:24.723290 | orchestrator | 2025-02-10 09:34:24 | INFO  | Task a6b2b51c-7e95-4a24-81b0-b19d4dd17ddb is in state STARTED 2025-02-10 09:34:24.724525 | orchestrator | 2025-02-10 09:34:24 | INFO  | Task 7740f2cf-6af3-4ce8-a66b-95457744d624 is in state STARTED 2025-02-10 09:34:24.726743 | orchestrator | 2025-02-10 09:34:24 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:34:24.728759 | orchestrator | 2025-02-10 09:34:24 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:34:24.731154 | orchestrator | 2025-02-10 09:34:24 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:34:24.731447 | orchestrator | 2025-02-10 09:34:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:27.804442 | orchestrator | 2025-02-10 09:34:27 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:34:27.805024 | orchestrator | 2025-02-10 09:34:27 | INFO  | Task a6b2b51c-7e95-4a24-81b0-b19d4dd17ddb is in state STARTED 2025-02-10 09:34:27.805090 | orchestrator | 2025-02-10 09:34:27 | INFO  | Task 7740f2cf-6af3-4ce8-a66b-95457744d624 is in state STARTED 2025-02-10 09:34:27.805757 | orchestrator | 2025-02-10 09:34:27 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:34:27.813926 | orchestrator | 2025-02-10 09:34:27 | INFO  | Task 
4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:34:27.815315 | orchestrator | 2025-02-10 09:34:27 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:34:27.816595 | orchestrator | 2025-02-10 09:34:27 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:30.880208 | orchestrator | 2025-02-10 09:34:30 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:34:30.883198 | orchestrator | 2025-02-10 09:34:30 | INFO  | Task a6b2b51c-7e95-4a24-81b0-b19d4dd17ddb is in state STARTED 2025-02-10 09:34:30.884572 | orchestrator | 2025-02-10 09:34:30 | INFO  | Task 7740f2cf-6af3-4ce8-a66b-95457744d624 is in state STARTED 2025-02-10 09:34:30.886548 | orchestrator | 2025-02-10 09:34:30 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:34:30.888454 | orchestrator | 2025-02-10 09:34:30 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:34:30.889883 | orchestrator | 2025-02-10 09:34:30 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:34:33.936411 | orchestrator | 2025-02-10 09:34:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:33.936629 | orchestrator | 2025-02-10 09:34:33 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:34:33.939248 | orchestrator | 2025-02-10 09:34:33 | INFO  | Task a6b2b51c-7e95-4a24-81b0-b19d4dd17ddb is in state STARTED 2025-02-10 09:34:33.939325 | orchestrator | 2025-02-10 09:34:33 | INFO  | Task 7740f2cf-6af3-4ce8-a66b-95457744d624 is in state STARTED 2025-02-10 09:34:33.940858 | orchestrator | 2025-02-10 09:34:33 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:34:33.942152 | orchestrator | 2025-02-10 09:34:33 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:34:33.943309 | orchestrator | 2025-02-10 09:34:33 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:34:33.943458 | orchestrator | 2025-02-10 09:34:33 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:36.996268 | orchestrator | 2025-02-10 09:34:36 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:34:36.997831 | orchestrator | 2025-02-10 09:34:36 | INFO  | Task a6b2b51c-7e95-4a24-81b0-b19d4dd17ddb is in state STARTED 2025-02-10 09:34:36.999577 | orchestrator | 2025-02-10 09:34:36 | INFO  | Task 7740f2cf-6af3-4ce8-a66b-95457744d624 is in state STARTED 2025-02-10 09:34:37.000964 | orchestrator | 2025-02-10 09:34:36 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:34:37.003420 | orchestrator | 2025-02-10 09:34:37 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:34:37.004808 | orchestrator | 2025-02-10 09:34:37 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:34:40.061302 | orchestrator | 2025-02-10 09:34:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:40.061470 | orchestrator | 2025-02-10 09:34:40 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:34:40.063395 | orchestrator | 2025-02-10 09:34:40 | INFO  | Task a6b2b51c-7e95-4a24-81b0-b19d4dd17ddb is in state SUCCESS 2025-02-10 09:34:40.063456 | orchestrator | 2025-02-10 09:34:40.063471 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-10 09:34:40.063482 | 
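Editor's note: the [WARNING] about osism.commons above is emitted by ansible-core when the running core version falls outside the requires_ansible range that a collection declares in its meta/runtime.yml. The sketch below reproduces that version check in Python; the requires_ansible value shown is a hypothetical example, not the actual constraint shipped by osism.commons (it requires PyYAML and the packaging library).

import yaml
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Hypothetical runtime.yml content; the real file lives inside the installed collection.
runtime_yml = 'requires_ansible: ">=2.16.0"'
running_version = Version("2.15.12")  # version reported by the warning in the log

spec = SpecifierSet(yaml.safe_load(runtime_yml)["requires_ansible"])
if not spec.contains(running_version, prereleases=True):
    print(f"[WARNING]: Collection osism.commons does not support "
          f"Ansible version {running_version}")

The warning is non-fatal: the play continues (as the "Apply role fetch-keys" play below shows), it only signals that the collection has not been validated against this ansible-core release.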
orchestrator | 2025-02-10 09:34:40.063494 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-02-10 09:34:40.063506 | orchestrator | 2025-02-10 09:34:40.063518 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-02-10 09:34:40.063530 | orchestrator | Monday 10 February 2025 09:34:05 +0000 (0:00:00.572) 0:00:00.572 ******* 2025-02-10 09:34:40.063541 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-02-10 09:34:40.063554 | orchestrator | 2025-02-10 09:34:40.063566 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-02-10 09:34:40.063577 | orchestrator | Monday 10 February 2025 09:34:05 +0000 (0:00:00.209) 0:00:00.782 ******* 2025-02-10 09:34:40.063589 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:34:40.063600 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-02-10 09:34:40.063612 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-02-10 09:34:40.063623 | orchestrator | 2025-02-10 09:34:40.063635 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-02-10 09:34:40.063674 | orchestrator | Monday 10 February 2025 09:34:06 +0000 (0:00:00.854) 0:00:01.636 ******* 2025-02-10 09:34:40.063686 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-02-10 09:34:40.063697 | orchestrator | 2025-02-10 09:34:40.063708 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-02-10 09:34:40.063719 | orchestrator | Monday 10 February 2025 09:34:06 +0000 (0:00:00.269) 0:00:01.905 ******* 2025-02-10 09:34:40.063730 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:40.063750 | orchestrator | 2025-02-10 09:34:40.063771 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-02-10 09:34:40.063913 | orchestrator | Monday 10 February 2025 09:34:07 +0000 (0:00:00.749) 0:00:02.655 ******* 2025-02-10 09:34:40.063930 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:40.063942 | orchestrator | 2025-02-10 09:34:40.063953 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-02-10 09:34:40.063964 | orchestrator | Monday 10 February 2025 09:34:07 +0000 (0:00:00.144) 0:00:02.800 ******* 2025-02-10 09:34:40.064002 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:40.064014 | orchestrator | 2025-02-10 09:34:40.064042 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-02-10 09:34:40.064054 | orchestrator | Monday 10 February 2025 09:34:08 +0000 (0:00:00.473) 0:00:03.273 ******* 2025-02-10 09:34:40.064065 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:40.064076 | orchestrator | 2025-02-10 09:34:40.064088 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-02-10 09:34:40.064099 | orchestrator | Monday 10 February 2025 09:34:08 +0000 (0:00:00.159) 0:00:03.432 ******* 2025-02-10 09:34:40.064110 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:40.064122 | orchestrator | 2025-02-10 09:34:40.064133 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-02-10 09:34:40.064144 | orchestrator | Monday 10 February 2025 09:34:08 +0000 
(0:00:00.145) 0:00:03.578 ******* 2025-02-10 09:34:40.064155 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:40.064166 | orchestrator | 2025-02-10 09:34:40.064177 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-02-10 09:34:40.064189 | orchestrator | Monday 10 February 2025 09:34:08 +0000 (0:00:00.166) 0:00:03.744 ******* 2025-02-10 09:34:40.064200 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.064212 | orchestrator | 2025-02-10 09:34:40.064223 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-02-10 09:34:40.064234 | orchestrator | Monday 10 February 2025 09:34:08 +0000 (0:00:00.143) 0:00:03.887 ******* 2025-02-10 09:34:40.064245 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:40.064256 | orchestrator | 2025-02-10 09:34:40.064268 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-02-10 09:34:40.064279 | orchestrator | Monday 10 February 2025 09:34:09 +0000 (0:00:00.364) 0:00:04.252 ******* 2025-02-10 09:34:40.064290 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:34:40.064301 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:34:40.064312 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:34:40.064323 | orchestrator | 2025-02-10 09:34:40.064334 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-02-10 09:34:40.064345 | orchestrator | Monday 10 February 2025 09:34:09 +0000 (0:00:00.795) 0:00:05.047 ******* 2025-02-10 09:34:40.064356 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:40.064367 | orchestrator | 2025-02-10 09:34:40.064378 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-02-10 09:34:40.064389 | orchestrator | Monday 10 February 2025 09:34:10 +0000 (0:00:00.270) 0:00:05.318 ******* 2025-02-10 09:34:40.064401 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:34:40.064411 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:34:40.064423 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:34:40.064444 | orchestrator | 2025-02-10 09:34:40.064455 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-02-10 09:34:40.064466 | orchestrator | Monday 10 February 2025 09:34:12 +0000 (0:00:02.156) 0:00:07.474 ******* 2025-02-10 09:34:40.064478 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:34:40.064489 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:34:40.064500 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:34:40.064511 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.064524 | orchestrator | 2025-02-10 09:34:40.064537 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-02-10 09:34:40.064561 | orchestrator | Monday 10 February 2025 09:34:12 +0000 (0:00:00.469) 0:00:07.943 ******* 2025-02-10 09:34:40.064577 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-02-10 09:34:40.064593 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-02-10 09:34:40.064606 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-02-10 09:34:40.064618 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.064631 | orchestrator | 2025-02-10 09:34:40.064644 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-02-10 09:34:40.064656 | orchestrator | Monday 10 February 2025 09:34:13 +0000 (0:00:00.941) 0:00:08.885 ******* 2025-02-10 09:34:40.064673 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-10 09:34:40.064690 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-10 09:34:40.064702 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-10 09:34:40.064713 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.064724 | orchestrator | 2025-02-10 09:34:40.064736 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-02-10 09:34:40.064747 | orchestrator | Monday 10 February 2025 09:34:13 +0000 (0:00:00.173) 0:00:09.058 ******* 2025-02-10 09:34:40.064797 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '4484a4da621b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-02-10 09:34:10.916612', 'end': '2025-02-10 09:34:10.960778', 'delta': '0:00:00.044166', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4484a4da621b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-02-10 09:34:40.064821 | orchestrator | ok: [testbed-node-0] => 
(item={'changed': True, 'stdout': '20ef0d984121', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-02-10 09:34:11.585477', 'end': '2025-02-10 09:34:11.630027', 'delta': '0:00:00.044550', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['20ef0d984121'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-02-10 09:34:40.064843 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'a6a262b85557', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-02-10 09:34:12.165820', 'end': '2025-02-10 09:34:12.203654', 'delta': '0:00:00.037834', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a6a262b85557'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-02-10 09:34:40.064856 | orchestrator | 2025-02-10 09:34:40.064867 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-02-10 09:34:40.064879 | orchestrator | Monday 10 February 2025 09:34:14 +0000 (0:00:00.362) 0:00:09.420 ******* 2025-02-10 09:34:40.064890 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:40.064906 | orchestrator | 2025-02-10 09:34:40.064917 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-02-10 09:34:40.064933 | orchestrator | Monday 10 February 2025 09:34:15 +0000 (0:00:00.891) 0:00:10.312 ******* 2025-02-10 09:34:40.064945 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-02-10 09:34:40.064956 | orchestrator | 2025-02-10 09:34:40.064967 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-02-10 09:34:40.064997 | orchestrator | Monday 10 February 2025 09:34:16 +0000 (0:00:01.617) 0:00:11.929 ******* 2025-02-10 09:34:40.065008 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.065020 | orchestrator | 2025-02-10 09:34:40.065052 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-02-10 09:34:40.065064 | orchestrator | Monday 10 February 2025 09:34:16 +0000 (0:00:00.161) 0:00:12.090 ******* 2025-02-10 09:34:40.065075 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.065086 | orchestrator | 2025-02-10 09:34:40.065098 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-10 09:34:40.065109 | orchestrator | Monday 10 February 2025 09:34:17 +0000 (0:00:00.287) 0:00:12.377 ******* 2025-02-10 09:34:40.065121 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.065132 | orchestrator | 2025-02-10 09:34:40.065155 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-02-10 09:34:40.065167 | orchestrator | Monday 10 February 2025 09:34:17 +0000 (0:00:00.148) 0:00:12.526 ******* 2025-02-10 
09:34:40.065178 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:40.065190 | orchestrator | 2025-02-10 09:34:40.065201 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-02-10 09:34:40.065212 | orchestrator | Monday 10 February 2025 09:34:17 +0000 (0:00:00.163) 0:00:12.689 ******* 2025-02-10 09:34:40.065230 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.065242 | orchestrator | 2025-02-10 09:34:40.065253 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-10 09:34:40.065265 | orchestrator | Monday 10 February 2025 09:34:17 +0000 (0:00:00.281) 0:00:12.971 ******* 2025-02-10 09:34:40.065276 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.065287 | orchestrator | 2025-02-10 09:34:40.065298 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-02-10 09:34:40.065310 | orchestrator | Monday 10 February 2025 09:34:17 +0000 (0:00:00.142) 0:00:13.113 ******* 2025-02-10 09:34:40.065321 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.065332 | orchestrator | 2025-02-10 09:34:40.065343 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-02-10 09:34:40.065354 | orchestrator | Monday 10 February 2025 09:34:18 +0000 (0:00:00.166) 0:00:13.279 ******* 2025-02-10 09:34:40.065366 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.065377 | orchestrator | 2025-02-10 09:34:40.065388 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-02-10 09:34:40.065399 | orchestrator | Monday 10 February 2025 09:34:18 +0000 (0:00:00.149) 0:00:13.428 ******* 2025-02-10 09:34:40.065411 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.065422 | orchestrator | 2025-02-10 09:34:40.065433 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-02-10 09:34:40.065445 | orchestrator | Monday 10 February 2025 09:34:18 +0000 (0:00:00.365) 0:00:13.794 ******* 2025-02-10 09:34:40.065456 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.065467 | orchestrator | 2025-02-10 09:34:40.065478 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-02-10 09:34:40.065489 | orchestrator | Monday 10 February 2025 09:34:18 +0000 (0:00:00.169) 0:00:13.963 ******* 2025-02-10 09:34:40.065500 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.065511 | orchestrator | 2025-02-10 09:34:40.065523 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-02-10 09:34:40.065534 | orchestrator | Monday 10 February 2025 09:34:19 +0000 (0:00:00.166) 0:00:14.130 ******* 2025-02-10 09:34:40.065545 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.065556 | orchestrator | 2025-02-10 09:34:40.065567 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-02-10 09:34:40.065579 | orchestrator | Monday 10 February 2025 09:34:19 +0000 (0:00:00.169) 0:00:14.299 ******* 2025-02-10 09:34:40.065590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:34:40.065609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:34:40.065622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:34:40.065636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:34:40.065776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:34:40.065814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:34:40.065832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:34:40.065850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-10 09:34:40.065902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66', 'scsi-SQEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66-part1', 'scsi-SQEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66-part14', 'scsi-SQEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66-part15', 'scsi-SQEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66-part16', 'scsi-SQEMU_QEMU_HARDDISK_f264afce-82d2-497c-9a77-eb4255e0ba66-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:34:40.065939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f2f0c75-1857-43ef-b86a-d1c385559ce2', 'scsi-SQEMU_QEMU_HARDDISK_3f2f0c75-1857-43ef-b86a-d1c385559ce2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:34:40.065959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c8bf85e-c93c-4dde-a0b9-becc690957dc', 'scsi-SQEMU_QEMU_HARDDISK_4c8bf85e-c93c-4dde-a0b9-becc690957dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:34:40.065999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_75df373f-19f7-4c01-b032-3384165fc32e', 'scsi-SQEMU_QEMU_HARDDISK_75df373f-19f7-4c01-b032-3384165fc32e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:34:40.066013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-10-08-33-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-10 09:34:40.066135 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.066158 | orchestrator | 2025-02-10 09:34:40.066174 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-02-10 09:34:40.066188 | orchestrator | Monday 10 February 2025 09:34:19 +0000 (0:00:00.415) 0:00:14.715 ******* 2025-02-10 09:34:40.066202 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.066217 | orchestrator | 2025-02-10 09:34:40.066231 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-02-10 09:34:40.066245 | orchestrator | Monday 10 February 2025 09:34:19 +0000 (0:00:00.329) 0:00:15.044 ******* 2025-02-10 09:34:40.066259 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.066273 | orchestrator | 2025-02-10 09:34:40.066287 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-02-10 09:34:40.066315 | orchestrator | Monday 10 February 2025 09:34:20 +0000 (0:00:00.208) 0:00:15.252 ******* 2025-02-10 09:34:40.066330 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.066344 | orchestrator | 2025-02-10 09:34:40.066358 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-02-10 09:34:40.066372 | orchestrator | Monday 10 February 2025 09:34:20 +0000 (0:00:00.157) 0:00:15.410 ******* 2025-02-10 09:34:40.066396 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:40.066413 | orchestrator | 2025-02-10 09:34:40.066429 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-02-10 09:34:40.066455 | orchestrator | Monday 10 February 2025 09:34:20 +0000 (0:00:00.571) 0:00:15.981 ******* 2025-02-10 09:34:40.066471 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:40.066488 | orchestrator | 2025-02-10 09:34:40.066504 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-10 09:34:40.066519 | orchestrator | Monday 10 February 2025 09:34:21 +0000 (0:00:00.204) 0:00:16.186 ******* 2025-02-10 09:34:40.066535 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:40.066550 | orchestrator | 2025-02-10 09:34:40.066566 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-10 09:34:40.066582 | orchestrator | Monday 10 February 2025 09:34:21 +0000 (0:00:00.924) 
0:00:17.110 ******* 2025-02-10 09:34:40.066598 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:40.066614 | orchestrator | 2025-02-10 09:34:40.066630 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-10 09:34:40.066646 | orchestrator | Monday 10 February 2025 09:34:22 +0000 (0:00:00.273) 0:00:17.384 ******* 2025-02-10 09:34:40.066662 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.066678 | orchestrator | 2025-02-10 09:34:40.066693 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-10 09:34:40.066709 | orchestrator | Monday 10 February 2025 09:34:22 +0000 (0:00:00.358) 0:00:17.742 ******* 2025-02-10 09:34:40.066726 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.066740 | orchestrator | 2025-02-10 09:34:40.066754 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-02-10 09:34:40.066768 | orchestrator | Monday 10 February 2025 09:34:22 +0000 (0:00:00.177) 0:00:17.919 ******* 2025-02-10 09:34:40.066782 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:34:40.066797 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:34:40.066811 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:34:40.066825 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.066842 | orchestrator | 2025-02-10 09:34:40.066866 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-02-10 09:34:40.067069 | orchestrator | Monday 10 February 2025 09:34:23 +0000 (0:00:00.608) 0:00:18.528 ******* 2025-02-10 09:34:40.067087 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:34:40.067102 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:34:40.067116 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:34:40.067130 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.067144 | orchestrator | 2025-02-10 09:34:40.067158 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-02-10 09:34:40.067172 | orchestrator | Monday 10 February 2025 09:34:24 +0000 (0:00:00.618) 0:00:19.146 ******* 2025-02-10 09:34:40.067186 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:34:40.067200 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-02-10 09:34:40.067214 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-02-10 09:34:40.067228 | orchestrator | 2025-02-10 09:34:40.067242 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-02-10 09:34:40.067256 | orchestrator | Monday 10 February 2025 09:34:25 +0000 (0:00:01.595) 0:00:20.742 ******* 2025-02-10 09:34:40.067270 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:34:40.067284 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:34:40.067298 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:34:40.067311 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.067325 | orchestrator | 2025-02-10 09:34:40.067339 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-02-10 09:34:40.067353 | orchestrator | Monday 10 February 2025 09:34:25 +0000 
(0:00:00.261) 0:00:21.003 ******* 2025-02-10 09:34:40.067367 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-10 09:34:40.067381 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-10 09:34:40.067406 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-10 09:34:40.067420 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.067434 | orchestrator | 2025-02-10 09:34:40.067448 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-02-10 09:34:40.067462 | orchestrator | Monday 10 February 2025 09:34:26 +0000 (0:00:00.313) 0:00:21.316 ******* 2025-02-10 09:34:40.067476 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-02-10 09:34:40.067490 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-10 09:34:40.067505 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-10 09:34:40.067519 | orchestrator | 2025-02-10 09:34:40.067533 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-02-10 09:34:40.067547 | orchestrator | Monday 10 February 2025 09:34:26 +0000 (0:00:00.520) 0:00:21.836 ******* 2025-02-10 09:34:40.067561 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.067574 | orchestrator | 2025-02-10 09:34:40.067588 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-02-10 09:34:40.067602 | orchestrator | Monday 10 February 2025 09:34:26 +0000 (0:00:00.171) 0:00:22.008 ******* 2025-02-10 09:34:40.067616 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:34:40.067630 | orchestrator | 2025-02-10 09:34:40.067644 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-02-10 09:34:40.067659 | orchestrator | Monday 10 February 2025 09:34:27 +0000 (0:00:00.171) 0:00:22.179 ******* 2025-02-10 09:34:40.067675 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:34:40.067708 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:34:40.067724 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 09:34:40.067740 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-02-10 09:34:40.067757 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-10 09:34:40.067773 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-10 09:34:40.067789 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-10 09:34:40.067805 | orchestrator | 2025-02-10 09:34:40.067821 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-02-10 09:34:40.067837 | orchestrator | Monday 10 February 2025 09:34:28 +0000 (0:00:01.093) 0:00:23.272 ******* 2025-02-10 09:34:40.067853 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:34:40.067870 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-10 09:34:40.067887 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-10 
09:34:40.067902 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-02-10 09:34:40.067918 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-10 09:34:40.067934 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-10 09:34:40.067950 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-10 09:34:40.067964 | orchestrator | 2025-02-10 09:34:40.067999 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-02-10 09:34:40.068014 | orchestrator | Monday 10 February 2025 09:34:29 +0000 (0:00:01.844) 0:00:25.117 ******* 2025-02-10 09:34:40.068027 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:34:40.068042 | orchestrator | 2025-02-10 09:34:40.068055 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-02-10 09:34:40.068069 | orchestrator | Monday 10 February 2025 09:34:30 +0000 (0:00:00.515) 0:00:25.632 ******* 2025-02-10 09:34:40.068091 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:34:40.068105 | orchestrator | 2025-02-10 09:34:40.068119 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-02-10 09:34:40.068134 | orchestrator | Monday 10 February 2025 09:34:31 +0000 (0:00:00.722) 0:00:26.355 ******* 2025-02-10 09:34:40.068148 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-02-10 09:34:40.068161 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-02-10 09:34:40.068175 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-02-10 09:34:40.068189 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-02-10 09:34:40.068203 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring) 2025-02-10 09:34:40.068217 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-02-10 09:34:40.068231 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-02-10 09:34:40.068245 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-02-10 09:34:40.068259 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-02-10 09:34:40.068274 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-02-10 09:34:40.068288 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-02-10 09:34:40.068302 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-02-10 09:34:40.068326 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-02-10 09:34:40.068480 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-02-10 09:34:40.068521 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-02-10 09:34:40.068536 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-02-10 09:34:40.068550 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 
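
(Editor's note, not part of the job output: the ceph-fetch-keys task above pulls the client and bootstrap keyrings from the first monitor node back to the Ansible control side. Purely as a minimal sketch of how such a task could be written, assuming a hypothetical `fetch_directory` variable and an abbreviated file list, not the actual ceph-fetch-keys role implementation:)

```yaml
# Sketch only: copy keyrings from the first monitor to the control host.
# "fetch_directory" and the shortened loop list are assumptions for illustration.
- name: Copy ceph user and bootstrap keys to the ansible server
  ansible.builtin.fetch:
    src: "{{ item }}"
    dest: "{{ fetch_directory }}/"   # with flat disabled, files land under <dest>/<hostname>/<src path>
    flat: false
  loop:
    - /etc/ceph/ceph.client.admin.keyring
    - /var/lib/ceph/bootstrap-osd/ceph.keyring
```
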
2025-02-10 09:34:40.068564 | orchestrator | 2025-02-10 09:34:40.068578 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:34:40.068592 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-02-10 09:34:40.068608 | orchestrator | 2025-02-10 09:34:40.068622 | orchestrator | 2025-02-10 09:34:40.068888 | orchestrator | 2025-02-10 09:34:40.068906 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:34:40.068920 | orchestrator | Monday 10 February 2025 09:34:38 +0000 (0:00:06.884) 0:00:33.239 ******* 2025-02-10 09:34:40.068934 | orchestrator | =============================================================================== 2025-02-10 09:34:40.068948 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 6.88s 2025-02-10 09:34:40.068963 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.16s 2025-02-10 09:34:40.069140 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.84s 2025-02-10 09:34:40.069181 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.62s 2025-02-10 09:34:43.115504 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.60s 2025-02-10 09:34:43.115643 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.09s 2025-02-10 09:34:43.115662 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.94s 2025-02-10 09:34:43.115678 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.92s 2025-02-10 09:34:43.115692 | orchestrator | ceph-facts : set_fact _container_exec_cmd ------------------------------- 0.89s 2025-02-10 09:34:43.115741 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.85s 2025-02-10 09:34:43.115756 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.80s 2025-02-10 09:34:43.115771 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.75s 2025-02-10 09:34:43.115785 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.72s 2025-02-10 09:34:43.115799 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.62s 2025-02-10 09:34:43.115813 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.61s 2025-02-10 09:34:43.115827 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.57s 2025-02-10 09:34:43.115841 | orchestrator | ceph-facts : set_fact _current_monitor_address -------------------------- 0.52s 2025-02-10 09:34:43.115855 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.52s 2025-02-10 09:34:43.115869 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.47s 2025-02-10 09:34:43.115883 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.47s 2025-02-10 09:34:43.115898 | orchestrator | 2025-02-10 09:34:40 | INFO  | Task 7740f2cf-6af3-4ce8-a66b-95457744d624 is in state STARTED 2025-02-10 09:34:43.115913 | orchestrator | 2025-02-10 09:34:40 | INFO  | Task 
56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:34:43.115927 | orchestrator | 2025-02-10 09:34:40 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:34:43.115941 | orchestrator | 2025-02-10 09:34:40 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:34:43.115955 | orchestrator | 2025-02-10 09:34:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:43.116017 | orchestrator | 2025-02-10 09:34:43 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:34:43.117849 | orchestrator | 2025-02-10 09:34:43 | INFO  | Task 7740f2cf-6af3-4ce8-a66b-95457744d624 is in state STARTED 2025-02-10 09:34:43.118459 | orchestrator | 2025-02-10 09:34:43 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:34:43.120703 | orchestrator | 2025-02-10 09:34:43 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:34:43.123781 | orchestrator | 2025-02-10 09:34:43 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:34:46.171170 | orchestrator | 2025-02-10 09:34:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:46.171338 | orchestrator | 2025-02-10 09:34:46 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:34:46.171879 | orchestrator | 2025-02-10 09:34:46 | INFO  | Task 7740f2cf-6af3-4ce8-a66b-95457744d624 is in state SUCCESS 2025-02-10 09:34:46.171921 | orchestrator | 2025-02-10 09:34:46 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:34:46.172613 | orchestrator | 2025-02-10 09:34:46 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:34:46.173492 | orchestrator | 2025-02-10 09:34:46 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:34:46.174888 | orchestrator | 2025-02-10 09:34:46 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:34:46.175034 | orchestrator | 2025-02-10 09:34:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:49.221962 | orchestrator | 2025-02-10 09:34:49 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:34:49.222344 | orchestrator | 2025-02-10 09:34:49 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:34:49.226416 | orchestrator | 2025-02-10 09:34:49 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:34:49.227296 | orchestrator | 2025-02-10 09:34:49 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:34:49.228196 | orchestrator | 2025-02-10 09:34:49 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:34:52.317530 | orchestrator | 2025-02-10 09:34:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:52.317700 | orchestrator | 2025-02-10 09:34:52 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:34:52.318561 | orchestrator | 2025-02-10 09:34:52 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:34:52.320716 | orchestrator | 2025-02-10 09:34:52 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:34:52.324626 | orchestrator | 2025-02-10 09:34:52 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:34:52.326631 | orchestrator | 2025-02-10 
09:34:52 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:34:55.365471 | orchestrator | 2025-02-10 09:34:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:55.365627 | orchestrator | 2025-02-10 09:34:55 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:34:55.367212 | orchestrator | 2025-02-10 09:34:55 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:34:55.367256 | orchestrator | 2025-02-10 09:34:55 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:34:55.368288 | orchestrator | 2025-02-10 09:34:55 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:34:55.369604 | orchestrator | 2025-02-10 09:34:55 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:34:58.423371 | orchestrator | 2025-02-10 09:34:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:34:58.423520 | orchestrator | 2025-02-10 09:34:58 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:34:58.425236 | orchestrator | 2025-02-10 09:34:58 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:34:58.428106 | orchestrator | 2025-02-10 09:34:58 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:34:58.429252 | orchestrator | 2025-02-10 09:34:58 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:01.476296 | orchestrator | 2025-02-10 09:34:58 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:35:01.476430 | orchestrator | 2025-02-10 09:34:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:01.476460 | orchestrator | 2025-02-10 09:35:01 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:01.477824 | orchestrator | 2025-02-10 09:35:01 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:01.480682 | orchestrator | 2025-02-10 09:35:01 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:01.483347 | orchestrator | 2025-02-10 09:35:01 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:01.486739 | orchestrator | 2025-02-10 09:35:01 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:35:04.530439 | orchestrator | 2025-02-10 09:35:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:04.530674 | orchestrator | 2025-02-10 09:35:04 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:04.531715 | orchestrator | 2025-02-10 09:35:04 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:04.532818 | orchestrator | 2025-02-10 09:35:04 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:04.535410 | orchestrator | 2025-02-10 09:35:04 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:04.536943 | orchestrator | 2025-02-10 09:35:04 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:35:04.537061 | orchestrator | 2025-02-10 09:35:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:07.593821 | orchestrator | 2025-02-10 09:35:07 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:07.595120 | orchestrator | 2025-02-10 
09:35:07 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:07.595175 | orchestrator | 2025-02-10 09:35:07 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:07.596103 | orchestrator | 2025-02-10 09:35:07 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:07.597087 | orchestrator | 2025-02-10 09:35:07 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:35:10.652219 | orchestrator | 2025-02-10 09:35:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:10.652427 | orchestrator | 2025-02-10 09:35:10 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:10.652657 | orchestrator | 2025-02-10 09:35:10 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:10.654422 | orchestrator | 2025-02-10 09:35:10 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:10.655328 | orchestrator | 2025-02-10 09:35:10 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:10.657532 | orchestrator | 2025-02-10 09:35:10 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:35:10.658261 | orchestrator | 2025-02-10 09:35:10 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:13.715061 | orchestrator | 2025-02-10 09:35:13 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:13.716258 | orchestrator | 2025-02-10 09:35:13 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:13.716858 | orchestrator | 2025-02-10 09:35:13 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:13.719744 | orchestrator | 2025-02-10 09:35:13 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:13.722847 | orchestrator | 2025-02-10 09:35:13 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:35:16.773643 | orchestrator | 2025-02-10 09:35:13 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:16.773826 | orchestrator | 2025-02-10 09:35:16 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:16.777216 | orchestrator | 2025-02-10 09:35:16 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:16.777256 | orchestrator | 2025-02-10 09:35:16 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:16.777861 | orchestrator | 2025-02-10 09:35:16 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:16.781007 | orchestrator | 2025-02-10 09:35:16 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:35:16.781449 | orchestrator | 2025-02-10 09:35:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:19.815226 | orchestrator | 2025-02-10 09:35:19 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:19.817846 | orchestrator | 2025-02-10 09:35:19 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:19.817883 | orchestrator | 2025-02-10 09:35:19 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:19.817908 | orchestrator | 2025-02-10 09:35:19 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:19.818266 | 
orchestrator | 2025-02-10 09:35:19 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:35:19.818413 | orchestrator | 2025-02-10 09:35:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:22.868461 | orchestrator | 2025-02-10 09:35:22 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:22.868877 | orchestrator | 2025-02-10 09:35:22 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:22.868923 | orchestrator | 2025-02-10 09:35:22 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:22.869743 | orchestrator | 2025-02-10 09:35:22 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:22.871369 | orchestrator | 2025-02-10 09:35:22 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:35:25.909522 | orchestrator | 2025-02-10 09:35:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:25.909653 | orchestrator | 2025-02-10 09:35:25 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:25.909890 | orchestrator | 2025-02-10 09:35:25 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:25.909917 | orchestrator | 2025-02-10 09:35:25 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:25.910817 | orchestrator | 2025-02-10 09:35:25 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:25.911462 | orchestrator | 2025-02-10 09:35:25 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:35:28.967790 | orchestrator | 2025-02-10 09:35:25 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:28.967968 | orchestrator | 2025-02-10 09:35:28 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:28.971238 | orchestrator | 2025-02-10 09:35:28 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:28.971318 | orchestrator | 2025-02-10 09:35:28 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:28.971358 | orchestrator | 2025-02-10 09:35:28 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:28.972265 | orchestrator | 2025-02-10 09:35:28 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:35:32.019164 | orchestrator | 2025-02-10 09:35:28 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:32.019411 | orchestrator | 2025-02-10 09:35:32 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:32.019677 | orchestrator | 2025-02-10 09:35:32 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:32.019739 | orchestrator | 2025-02-10 09:35:32 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:32.019751 | orchestrator | 2025-02-10 09:35:32 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:32.019760 | orchestrator | 2025-02-10 09:35:32 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:35:32.019775 | orchestrator | 2025-02-10 09:35:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:35.086723 | orchestrator | 2025-02-10 09:35:35 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:35.089556 | 
orchestrator | 2025-02-10 09:35:35 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:35.090141 | orchestrator | 2025-02-10 09:35:35 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:35.090886 | orchestrator | 2025-02-10 09:35:35 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:35.094416 | orchestrator | 2025-02-10 09:35:35 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:35:38.151681 | orchestrator | 2025-02-10 09:35:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:38.151808 | orchestrator | 2025-02-10 09:35:38 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:38.153218 | orchestrator | 2025-02-10 09:35:38 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:38.153251 | orchestrator | 2025-02-10 09:35:38 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:38.154470 | orchestrator | 2025-02-10 09:35:38 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:38.156673 | orchestrator | 2025-02-10 09:35:38 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:35:41.257550 | orchestrator | 2025-02-10 09:35:38 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:41.257678 | orchestrator | 2025-02-10 09:35:41 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:41.260617 | orchestrator | 2025-02-10 09:35:41 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:41.260641 | orchestrator | 2025-02-10 09:35:41 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:41.261836 | orchestrator | 2025-02-10 09:35:41 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:41.264311 | orchestrator | 2025-02-10 09:35:41 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state STARTED 2025-02-10 09:35:41.265134 | orchestrator | 2025-02-10 09:35:41 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:44.289429 | orchestrator | 2025-02-10 09:35:44 | INFO  | Task ed64f9a5-b1a9-4b54-bd90-5c23ceefa5a5 is in state STARTED 2025-02-10 09:35:44.293962 | orchestrator | 2025-02-10 09:35:44 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:44.294467 | orchestrator | 2025-02-10 09:35:44 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:44.294664 | orchestrator | 2025-02-10 09:35:44 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:44.294697 | orchestrator | 2025-02-10 09:35:44 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:44.294714 | orchestrator | 2025-02-10 09:35:44 | INFO  | Task 09efd800-132a-4a16-a85b-316fcac545b1 is in state SUCCESS 2025-02-10 09:35:44.294729 | orchestrator | 2025-02-10 09:35:44 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:44.294773 | orchestrator | 2025-02-10 09:35:44.294845 | orchestrator | 2025-02-10 09:35:44.294860 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-02-10 09:35:44.294875 | orchestrator | 2025-02-10 09:35:44.294889 | orchestrator | TASK [Check ceph keys] ********************************************************* 2025-02-10 09:35:44.294904 | 
orchestrator | Monday 10 February 2025 09:33:55 +0000 (0:00:00.165) 0:00:00.165 ******* 2025-02-10 09:35:44.294918 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-02-10 09:35:44.294932 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-02-10 09:35:44.294966 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-02-10 09:35:44.294981 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-02-10 09:35:44.295037 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-02-10 09:35:44.295054 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-02-10 09:35:44.295067 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-02-10 09:35:44.295081 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-02-10 09:35:44.295095 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-02-10 09:35:44.295109 | orchestrator | 2025-02-10 09:35:44.295123 | orchestrator | TASK [Set _fetch_ceph_keys fact] *********************************************** 2025-02-10 09:35:44.295137 | orchestrator | Monday 10 February 2025 09:33:58 +0000 (0:00:03.316) 0:00:03.482 ******* 2025-02-10 09:35:44.295151 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-02-10 09:35:44.295164 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-02-10 09:35:44.295178 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-02-10 09:35:44.295193 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-02-10 09:35:44.295206 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-02-10 09:35:44.295220 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-02-10 09:35:44.295234 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-02-10 09:35:44.295248 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-02-10 09:35:44.295262 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-02-10 09:35:44.295276 | orchestrator | 2025-02-10 09:35:44.295290 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] *** 2025-02-10 09:35:44.295307 | orchestrator | Monday 10 February 2025 09:33:58 +0000 (0:00:00.269) 0:00:03.751 ******* 2025-02-10 09:35:44.295322 | orchestrator | ok: [testbed-manager] => { 2025-02-10 09:35:44.295342 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete." 
2025-02-10 09:35:44.295360 | orchestrator | } 2025-02-10 09:35:44.295375 | orchestrator | 2025-02-10 09:35:44.295390 | orchestrator | TASK [Fetch ceph keys from the first monitor node] ***************************** 2025-02-10 09:35:44.295406 | orchestrator | Monday 10 February 2025 09:33:58 +0000 (0:00:00.182) 0:00:03.933 ******* 2025-02-10 09:35:44.295421 | orchestrator | changed: [testbed-manager] 2025-02-10 09:35:44.295436 | orchestrator | 2025-02-10 09:35:44.295452 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] *********** 2025-02-10 09:35:44.295467 | orchestrator | Monday 10 February 2025 09:34:39 +0000 (0:00:40.063) 0:00:43.997 ******* 2025-02-10 09:35:44.295484 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'}) 2025-02-10 09:35:44.295511 | orchestrator | 2025-02-10 09:35:44.295527 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ******************** 2025-02-10 09:35:44.295542 | orchestrator | Monday 10 February 2025 09:34:39 +0000 (0:00:00.734) 0:00:44.731 ******* 2025-02-10 09:35:44.295559 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'}) 2025-02-10 09:35:44.295577 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'}) 2025-02-10 09:35:44.295592 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'}) 2025-02-10 09:35:44.295619 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'}) 2025-02-10 09:35:47.326482 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'}) 2025-02-10 09:35:47.326631 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'}) 2025-02-10 09:35:47.326670 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'}) 2025-02-10 09:35:47.326686 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'}) 2025-02-10 09:35:47.326701 | orchestrator | 2025-02-10 09:35:47.326717 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] ******************* 2025-02-10 09:35:47.326733 | orchestrator | Monday 10 February 2025 09:34:42 +0000 (0:00:03.206) 0:00:47.938 ******* 2025-02-10 09:35:47.326747 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:35:47.326763 | orchestrator | 2025-02-10 09:35:47.326777 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:35:47.326793 | orchestrator | 
testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:35:47.326807 | orchestrator | 2025-02-10 09:35:47.326821 | orchestrator | 2025-02-10 09:35:47.326835 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:35:47.326849 | orchestrator | Monday 10 February 2025 09:34:42 +0000 (0:00:00.044) 0:00:47.982 ******* 2025-02-10 09:35:47.326863 | orchestrator | =============================================================================== 2025-02-10 09:35:47.326895 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 40.06s 2025-02-10 09:35:47.326910 | orchestrator | Check ceph keys --------------------------------------------------------- 3.32s 2025-02-10 09:35:47.326924 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 3.21s 2025-02-10 09:35:47.326938 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.73s 2025-02-10 09:35:47.326952 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.27s 2025-02-10 09:35:47.326966 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.18s 2025-02-10 09:35:47.326982 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.04s 2025-02-10 09:35:47.327024 | orchestrator | 2025-02-10 09:35:47.327062 | orchestrator | 2025-02-10 09:35:47 | INFO  | Task ed64f9a5-b1a9-4b54-bd90-5c23ceefa5a5 is in state STARTED 2025-02-10 09:35:47.328198 | orchestrator | 2025-02-10 09:35:47 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:47.328587 | orchestrator | 2025-02-10 09:35:47 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:47.329146 | orchestrator | 2025-02-10 09:35:47 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:47.329697 | orchestrator | 2025-02-10 09:35:47 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:50.361885 | orchestrator | 2025-02-10 09:35:47 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:50.362122 | orchestrator | 2025-02-10 09:35:50 | INFO  | Task ed64f9a5-b1a9-4b54-bd90-5c23ceefa5a5 is in state STARTED 2025-02-10 09:35:50.362780 | orchestrator | 2025-02-10 09:35:50 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:50.362831 | orchestrator | 2025-02-10 09:35:50 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:50.368572 | orchestrator | 2025-02-10 09:35:50 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:50.372147 | orchestrator | 2025-02-10 09:35:50 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:53.406798 | orchestrator | 2025-02-10 09:35:50 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:53.406986 | orchestrator | 2025-02-10 09:35:53 | INFO  | Task ed64f9a5-b1a9-4b54-bd90-5c23ceefa5a5 is in state STARTED 2025-02-10 09:35:53.407232 | orchestrator | 2025-02-10 09:35:53 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:53.407254 | orchestrator | 2025-02-10 09:35:53 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:53.407278 | orchestrator | 2025-02-10 09:35:53 | INFO  | Task 
4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:53.408242 | orchestrator | 2025-02-10 09:35:53 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:56.449275 | orchestrator | 2025-02-10 09:35:53 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:56.449435 | orchestrator | 2025-02-10 09:35:56 | INFO  | Task ed64f9a5-b1a9-4b54-bd90-5c23ceefa5a5 is in state STARTED 2025-02-10 09:35:56.451222 | orchestrator | 2025-02-10 09:35:56 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:56.451284 | orchestrator | 2025-02-10 09:35:56 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:56.451799 | orchestrator | 2025-02-10 09:35:56 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:56.452521 | orchestrator | 2025-02-10 09:35:56 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:35:56.452603 | orchestrator | 2025-02-10 09:35:56 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:35:59.493197 | orchestrator | 2025-02-10 09:35:59 | INFO  | Task ed64f9a5-b1a9-4b54-bd90-5c23ceefa5a5 is in state STARTED 2025-02-10 09:35:59.494959 | orchestrator | 2025-02-10 09:35:59 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:35:59.495207 | orchestrator | 2025-02-10 09:35:59 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:35:59.497375 | orchestrator | 2025-02-10 09:35:59 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:35:59.497445 | orchestrator | 2025-02-10 09:35:59 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:02.543505 | orchestrator | 2025-02-10 09:35:59 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:02.543697 | orchestrator | 2025-02-10 09:36:02 | INFO  | Task ed64f9a5-b1a9-4b54-bd90-5c23ceefa5a5 is in state STARTED 2025-02-10 09:36:05.578927 | orchestrator | 2025-02-10 09:36:02 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:05.579290 | orchestrator | 2025-02-10 09:36:02 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:05.579326 | orchestrator | 2025-02-10 09:36:02 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:36:05.579342 | orchestrator | 2025-02-10 09:36:02 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:05.579359 | orchestrator | 2025-02-10 09:36:02 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:05.579394 | orchestrator | 2025-02-10 09:36:05 | INFO  | Task ed64f9a5-b1a9-4b54-bd90-5c23ceefa5a5 is in state STARTED 2025-02-10 09:36:05.581332 | orchestrator | 2025-02-10 09:36:05 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:05.581384 | orchestrator | 2025-02-10 09:36:05 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:05.581419 | orchestrator | 2025-02-10 09:36:05 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:36:05.581910 | orchestrator | 2025-02-10 09:36:05 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:08.611556 | orchestrator | 2025-02-10 09:36:05 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:08.611683 | orchestrator | 2025-02-10 09:36:08 | INFO  | Task 
ed64f9a5-b1a9-4b54-bd90-5c23ceefa5a5 is in state STARTED 2025-02-10 09:36:08.614193 | orchestrator | 2025-02-10 09:36:08 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:08.614237 | orchestrator | 2025-02-10 09:36:08 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:08.614511 | orchestrator | 2025-02-10 09:36:08 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:36:08.618730 | orchestrator | 2025-02-10 09:36:08 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:11.642956 | orchestrator | 2025-02-10 09:36:08 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:11.643361 | orchestrator | 2025-02-10 09:36:11 | INFO  | Task ed64f9a5-b1a9-4b54-bd90-5c23ceefa5a5 is in state STARTED 2025-02-10 09:36:11.643801 | orchestrator | 2025-02-10 09:36:11 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:11.643848 | orchestrator | 2025-02-10 09:36:11 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:11.643880 | orchestrator | 2025-02-10 09:36:11 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:36:11.644346 | orchestrator | 2025-02-10 09:36:11 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:11.645343 | orchestrator | 2025-02-10 09:36:11 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:14.687536 | orchestrator | 2025-02-10 09:36:14 | INFO  | Task ed64f9a5-b1a9-4b54-bd90-5c23ceefa5a5 is in state STARTED 2025-02-10 09:36:14.688694 | orchestrator | 2025-02-10 09:36:14 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:14.690137 | orchestrator | 2025-02-10 09:36:14 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:14.690907 | orchestrator | 2025-02-10 09:36:14 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:36:14.692244 | orchestrator | 2025-02-10 09:36:14 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:17.726454 | orchestrator | 2025-02-10 09:36:14 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:17.726607 | orchestrator | 2025-02-10 09:36:17 | INFO  | Task ed64f9a5-b1a9-4b54-bd90-5c23ceefa5a5 is in state STARTED 2025-02-10 09:36:17.727142 | orchestrator | 2025-02-10 09:36:17 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:17.727176 | orchestrator | 2025-02-10 09:36:17 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:17.727563 | orchestrator | 2025-02-10 09:36:17 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:36:17.728068 | orchestrator | 2025-02-10 09:36:17 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:20.762207 | orchestrator | 2025-02-10 09:36:17 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:20.762362 | orchestrator | 2025-02-10 09:36:20.762384 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-02-10 09:36:20.762400 | orchestrator | 2025-02-10 09:36:20.762415 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-02-10 09:36:20.762429 | orchestrator | Monday 10 February 2025 09:34:47 +0000 (0:00:00.205) 0:00:00.205 ******* 2025-02-10 
09:36:20.762444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-02-10 09:36:20.762460 | orchestrator | 2025-02-10 09:36:20.762658 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-02-10 09:36:20.762683 | orchestrator | Monday 10 February 2025 09:34:47 +0000 (0:00:00.270) 0:00:00.476 ******* 2025-02-10 09:36:20.762698 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-02-10 09:36:20.762713 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-02-10 09:36:20.762728 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-02-10 09:36:20.762743 | orchestrator | 2025-02-10 09:36:20.762757 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-02-10 09:36:20.762772 | orchestrator | Monday 10 February 2025 09:34:49 +0000 (0:00:01.456) 0:00:01.932 ******* 2025-02-10 09:36:20.762910 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-02-10 09:36:20.762932 | orchestrator | 2025-02-10 09:36:20.762946 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-02-10 09:36:20.762961 | orchestrator | Monday 10 February 2025 09:34:50 +0000 (0:00:01.442) 0:00:03.375 ******* 2025-02-10 09:36:20.762975 | orchestrator | changed: [testbed-manager] 2025-02-10 09:36:20.762991 | orchestrator | 2025-02-10 09:36:20.763031 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-02-10 09:36:20.763047 | orchestrator | Monday 10 February 2025 09:34:51 +0000 (0:00:01.399) 0:00:04.775 ******* 2025-02-10 09:36:20.763061 | orchestrator | changed: [testbed-manager] 2025-02-10 09:36:20.763074 | orchestrator | 2025-02-10 09:36:20.763088 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-02-10 09:36:20.763102 | orchestrator | Monday 10 February 2025 09:34:53 +0000 (0:00:01.092) 0:00:05.868 ******* 2025-02-10 09:36:20.763116 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
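
(Editor's note, not part of the job output: the "FAILED - RETRYING: ... (10 retries left)" line above is Ansible's standard retry output. A minimal sketch of the retry pattern that produces this message, assuming a plain `docker compose` command and the path `/opt/cephclient`; the real osism.services.cephclient task may be implemented differently:)

```yaml
# Sketch only: retry a command until it succeeds; each failed attempt prints
# "FAILED - RETRYING: [host]: <task name> (N retries left)." as seen in the log.
- name: Manage cephclient service
  ansible.builtin.command: docker compose --project-directory /opt/cephclient up -d
  register: result
  retries: 10          # matches the "(10 retries left)" countdown above
  delay: 5             # seconds between attempts (assumed value)
  until: result.rc == 0
  changed_when: false
```
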
2025-02-10 09:36:20.763131 | orchestrator | ok: [testbed-manager] 2025-02-10 09:36:20.763146 | orchestrator | 2025-02-10 09:36:20.763160 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-02-10 09:36:20.763174 | orchestrator | Monday 10 February 2025 09:35:29 +0000 (0:00:36.400) 0:00:42.269 ******* 2025-02-10 09:36:20.763188 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-02-10 09:36:20.763233 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-02-10 09:36:20.763248 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-02-10 09:36:20.763262 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-02-10 09:36:20.763276 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-02-10 09:36:20.763290 | orchestrator | 2025-02-10 09:36:20.763304 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-02-10 09:36:20.763318 | orchestrator | Monday 10 February 2025 09:35:34 +0000 (0:00:04.671) 0:00:46.940 ******* 2025-02-10 09:36:20.763331 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-02-10 09:36:20.763345 | orchestrator | 2025-02-10 09:36:20.763359 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-02-10 09:36:20.763373 | orchestrator | Monday 10 February 2025 09:35:34 +0000 (0:00:00.588) 0:00:47.528 ******* 2025-02-10 09:36:20.763387 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:36:20.763401 | orchestrator | 2025-02-10 09:36:20.763415 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-02-10 09:36:20.763429 | orchestrator | Monday 10 February 2025 09:35:34 +0000 (0:00:00.147) 0:00:47.676 ******* 2025-02-10 09:36:20.763443 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:36:20.763457 | orchestrator | 2025-02-10 09:36:20.763471 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-02-10 09:36:20.763484 | orchestrator | Monday 10 February 2025 09:35:35 +0000 (0:00:00.329) 0:00:48.005 ******* 2025-02-10 09:36:20.763498 | orchestrator | changed: [testbed-manager] 2025-02-10 09:36:20.763512 | orchestrator | 2025-02-10 09:36:20.763526 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-02-10 09:36:20.763558 | orchestrator | Monday 10 February 2025 09:35:38 +0000 (0:00:03.179) 0:00:51.184 ******* 2025-02-10 09:36:20.763574 | orchestrator | changed: [testbed-manager] 2025-02-10 09:36:20.763590 | orchestrator | 2025-02-10 09:36:20.763605 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-02-10 09:36:20.763621 | orchestrator | Monday 10 February 2025 09:35:39 +0000 (0:00:00.938) 0:00:52.123 ******* 2025-02-10 09:36:20.763636 | orchestrator | changed: [testbed-manager] 2025-02-10 09:36:20.763652 | orchestrator | 2025-02-10 09:36:20.763667 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-02-10 09:36:20.763683 | orchestrator | Monday 10 February 2025 09:35:40 +0000 (0:00:00.715) 0:00:52.838 ******* 2025-02-10 09:36:20.763698 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-02-10 09:36:20.763714 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-02-10 09:36:20.763729 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-02-10 09:36:20.763744 | orchestrator | ok: 
[testbed-manager] => (item=rbd) 2025-02-10 09:36:20.763760 | orchestrator | 2025-02-10 09:36:20.763775 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:36:20.763791 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:36:20.763808 | orchestrator | 2025-02-10 09:36:20.763824 | orchestrator | 2025-02-10 09:36:20.763853 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:36:20.764275 | orchestrator | Monday 10 February 2025 09:35:41 +0000 (0:00:01.719) 0:00:54.557 ******* 2025-02-10 09:36:20.764304 | orchestrator | =============================================================================== 2025-02-10 09:36:20.764317 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 36.40s 2025-02-10 09:36:20.764329 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.67s 2025-02-10 09:36:20.764342 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 3.18s 2025-02-10 09:36:20.764355 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.72s 2025-02-10 09:36:20.764367 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.46s 2025-02-10 09:36:20.764395 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.44s 2025-02-10 09:36:20.764407 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.40s 2025-02-10 09:36:20.764420 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.09s 2025-02-10 09:36:20.764432 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.94s 2025-02-10 09:36:20.764444 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.72s 2025-02-10 09:36:20.764457 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.59s 2025-02-10 09:36:20.764469 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.33s 2025-02-10 09:36:20.764481 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.27s 2025-02-10 09:36:20.764494 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2025-02-10 09:36:20.764506 | orchestrator | 2025-02-10 09:36:20.764518 | orchestrator | 2025-02-10 09:36:20 | INFO  | Task ed64f9a5-b1a9-4b54-bd90-5c23ceefa5a5 is in state SUCCESS 2025-02-10 09:36:20.764531 | orchestrator | 2025-02-10 09:36:20 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:20.764543 | orchestrator | 2025-02-10 09:36:20 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:20.764556 | orchestrator | 2025-02-10 09:36:20 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:36:20.764576 | orchestrator | 2025-02-10 09:36:20 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:23.830090 | orchestrator | 2025-02-10 09:36:20 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:23.830264 | orchestrator | 2025-02-10 09:36:23 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:23.830881 | orchestrator | 2025-02-10 
09:36:23 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:23.830914 | orchestrator | 2025-02-10 09:36:23 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:36:23.830939 | orchestrator | 2025-02-10 09:36:23 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:26.866723 | orchestrator | 2025-02-10 09:36:23 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:26.866909 | orchestrator | 2025-02-10 09:36:26 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:26.868968 | orchestrator | 2025-02-10 09:36:26 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:26.869059 | orchestrator | 2025-02-10 09:36:26 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:36:26.869721 | orchestrator | 2025-02-10 09:36:26 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:26.871686 | orchestrator | 2025-02-10 09:36:26 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:29.915227 | orchestrator | 2025-02-10 09:36:29 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:29.916313 | orchestrator | 2025-02-10 09:36:29 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:29.918619 | orchestrator | 2025-02-10 09:36:29 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:36:29.922139 | orchestrator | 2025-02-10 09:36:29 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:32.960731 | orchestrator | 2025-02-10 09:36:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:32.960899 | orchestrator | 2025-02-10 09:36:32 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:32.961370 | orchestrator | 2025-02-10 09:36:32 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:32.961418 | orchestrator | 2025-02-10 09:36:32 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:36:32.962316 | orchestrator | 2025-02-10 09:36:32 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:36.010951 | orchestrator | 2025-02-10 09:36:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:36.011205 | orchestrator | 2025-02-10 09:36:36 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:36.012367 | orchestrator | 2025-02-10 09:36:36 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:36.012407 | orchestrator | 2025-02-10 09:36:36 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:36:36.012437 | orchestrator | 2025-02-10 09:36:36 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:39.062574 | orchestrator | 2025-02-10 09:36:36 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:39.062729 | orchestrator | 2025-02-10 09:36:39 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:39.062976 | orchestrator | 2025-02-10 09:36:39 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:39.063150 | orchestrator | 2025-02-10 09:36:39 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:36:39.063604 | orchestrator | 2025-02-10 
09:36:39 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:42.097955 | orchestrator | 2025-02-10 09:36:39 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:42.098236 | orchestrator | 2025-02-10 09:36:42 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:42.098614 | orchestrator | 2025-02-10 09:36:42 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:42.098647 | orchestrator | 2025-02-10 09:36:42 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:36:42.098673 | orchestrator | 2025-02-10 09:36:42 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:45.133963 | orchestrator | 2025-02-10 09:36:42 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:45.134355 | orchestrator | 2025-02-10 09:36:45 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:45.135322 | orchestrator | 2025-02-10 09:36:45 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:45.135358 | orchestrator | 2025-02-10 09:36:45 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:36:45.135381 | orchestrator | 2025-02-10 09:36:45 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:48.171594 | orchestrator | 2025-02-10 09:36:45 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:48.171762 | orchestrator | 2025-02-10 09:36:48 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:48.172073 | orchestrator | 2025-02-10 09:36:48 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:48.172111 | orchestrator | 2025-02-10 09:36:48 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:36:48.172988 | orchestrator | 2025-02-10 09:36:48 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:51.227642 | orchestrator | 2025-02-10 09:36:48 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:51.227844 | orchestrator | 2025-02-10 09:36:51 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:51.228789 | orchestrator | 2025-02-10 09:36:51 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:51.228847 | orchestrator | 2025-02-10 09:36:51 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:36:51.228874 | orchestrator | 2025-02-10 09:36:51 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:54.306736 | orchestrator | 2025-02-10 09:36:51 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:54.306941 | orchestrator | 2025-02-10 09:36:54 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:54.307147 | orchestrator | 2025-02-10 09:36:54 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:54.307180 | orchestrator | 2025-02-10 09:36:54 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state STARTED 2025-02-10 09:36:54.310456 | orchestrator | 2025-02-10 09:36:54 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:36:57.365110 | orchestrator | 2025-02-10 09:36:54 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:36:57.365663 | orchestrator | 2025-02-10 09:36:57 | INFO  | Task 
c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:36:57.365728 | orchestrator | 2025-02-10 09:36:57 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:36:57.365755 | orchestrator | 2025-02-10 09:36:57 | INFO  | Task 4bece448-31ec-4917-a4c9-e47fed18a089 is in state SUCCESS 2025-02-10 09:36:57.365781 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-10 09:36:57.365806 | orchestrator | 2025-02-10 09:36:57.365922 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-02-10 09:36:57.365939 | orchestrator | 2025-02-10 09:36:57.365954 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-02-10 09:36:57.365969 | orchestrator | Monday 10 February 2025 09:35:46 +0000 (0:00:00.587) 0:00:00.587 ******* 2025-02-10 09:36:57.365983 | orchestrator | changed: [testbed-manager] 2025-02-10 09:36:57.365999 | orchestrator | 2025-02-10 09:36:57.366133 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-02-10 09:36:57.366153 | orchestrator | Monday 10 February 2025 09:35:47 +0000 (0:00:01.557) 0:00:02.144 ******* 2025-02-10 09:36:57.366190 | orchestrator | changed: [testbed-manager] 2025-02-10 09:36:57.366205 | orchestrator | 2025-02-10 09:36:57.366220 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-02-10 09:36:57.366234 | orchestrator | Monday 10 February 2025 09:35:48 +0000 (0:00:01.103) 0:00:03.247 ******* 2025-02-10 09:36:57.366248 | orchestrator | changed: [testbed-manager] 2025-02-10 09:36:57.366262 | orchestrator | 2025-02-10 09:36:57.366276 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-02-10 09:36:57.366297 | orchestrator | Monday 10 February 2025 09:35:49 +0000 (0:00:01.092) 0:00:04.340 ******* 2025-02-10 09:36:57.366312 | orchestrator | changed: [testbed-manager] 2025-02-10 09:36:57.366326 | orchestrator | 2025-02-10 09:36:57.366340 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-02-10 09:36:57.366354 | orchestrator | Monday 10 February 2025 09:35:50 +0000 (0:00:01.027) 0:00:05.368 ******* 2025-02-10 09:36:57.366368 | orchestrator | changed: [testbed-manager] 2025-02-10 09:36:57.366382 | orchestrator | 2025-02-10 09:36:57.366396 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-02-10 09:36:57.366410 | orchestrator | Monday 10 February 2025 09:35:51 +0000 (0:00:01.134) 0:00:06.502 ******* 2025-02-10 09:36:57.366452 | orchestrator | changed: [testbed-manager] 2025-02-10 09:36:57.366467 | orchestrator | 2025-02-10 09:36:57.366481 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-02-10 09:36:57.366495 | orchestrator | Monday 10 February 2025 09:35:52 +0000 (0:00:01.036) 0:00:07.538 ******* 2025-02-10 09:36:57.366509 | orchestrator | changed: [testbed-manager] 2025-02-10 09:36:57.366523 | orchestrator | 2025-02-10 09:36:57.366537 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-02-10 09:36:57.366552 | orchestrator | Monday 10 February 2025 09:35:55 +0000 (0:00:02.200) 0:00:09.739 ******* 2025-02-10 09:36:57.366567 | orchestrator | changed: [testbed-manager] 2025-02-10 09:36:57.366583 | orchestrator | 2025-02-10 09:36:57.366668 | 
orchestrator | TASK [Create admin user] ******************************************************* 2025-02-10 09:36:57.366689 | orchestrator | Monday 10 February 2025 09:35:56 +0000 (0:00:01.355) 0:00:11.094 ******* 2025-02-10 09:36:57.366705 | orchestrator | changed: [testbed-manager] 2025-02-10 09:36:57.366721 | orchestrator | 2025-02-10 09:36:57.366737 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-02-10 09:36:57.366754 | orchestrator | Monday 10 February 2025 09:36:12 +0000 (0:00:15.685) 0:00:26.779 ******* 2025-02-10 09:36:57.366770 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:36:57.366796 | orchestrator | 2025-02-10 09:36:57.366812 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-02-10 09:36:57.366975 | orchestrator | 2025-02-10 09:36:57.366991 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-02-10 09:36:57.367005 | orchestrator | Monday 10 February 2025 09:36:12 +0000 (0:00:00.753) 0:00:27.532 ******* 2025-02-10 09:36:57.367044 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:36:57.367060 | orchestrator | 2025-02-10 09:36:57.367074 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-02-10 09:36:57.367089 | orchestrator | 2025-02-10 09:36:57.367103 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-02-10 09:36:57.367117 | orchestrator | Monday 10 February 2025 09:36:15 +0000 (0:00:02.336) 0:00:29.869 ******* 2025-02-10 09:36:57.367131 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:36:57.367146 | orchestrator | 2025-02-10 09:36:57.367160 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-02-10 09:36:57.367604 | orchestrator | 2025-02-10 09:36:57.367625 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-02-10 09:36:57.367641 | orchestrator | Monday 10 February 2025 09:36:16 +0000 (0:00:01.616) 0:00:31.485 ******* 2025-02-10 09:36:57.367947 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:36:57.367970 | orchestrator | 2025-02-10 09:36:57.367985 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:36:57.368002 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-10 09:36:57.368058 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:36:57.368074 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:36:57.368089 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:36:57.368103 | orchestrator | 2025-02-10 09:36:57.368117 | orchestrator | 2025-02-10 09:36:57.368131 | orchestrator | 2025-02-10 09:36:57.368189 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:36:57.368206 | orchestrator | Monday 10 February 2025 09:36:18 +0000 (0:00:01.478) 0:00:32.963 ******* 2025-02-10 09:36:57.368220 | orchestrator | =============================================================================== 2025-02-10 09:36:57.368250 | orchestrator | Create admin user ------------------------------------------------------ 15.69s 2025-02-10 
09:36:57.368264 | orchestrator | Restart ceph manager service -------------------------------------------- 5.43s 2025-02-10 09:36:57.368278 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.20s 2025-02-10 09:36:57.368292 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.56s 2025-02-10 09:36:57.368306 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.36s 2025-02-10 09:36:57.368328 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.13s 2025-02-10 09:36:57.368342 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.10s 2025-02-10 09:36:57.368356 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.09s 2025-02-10 09:36:57.368370 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.04s 2025-02-10 09:36:57.368384 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.03s 2025-02-10 09:36:57.368398 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.75s 2025-02-10 09:36:57.368412 | orchestrator | 2025-02-10 09:36:57.368426 | orchestrator | 2025-02-10 09:36:57.368440 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:36:57.368453 | orchestrator | 2025-02-10 09:36:57.368467 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:36:57.368481 | orchestrator | Monday 10 February 2025 09:34:20 +0000 (0:00:00.838) 0:00:00.838 ******* 2025-02-10 09:36:57.368495 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:36:57.368510 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:36:57.368524 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:36:57.368538 | orchestrator | 2025-02-10 09:36:57.368552 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:36:57.368567 | orchestrator | Monday 10 February 2025 09:34:21 +0000 (0:00:00.788) 0:00:01.627 ******* 2025-02-10 09:36:57.368583 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-02-10 09:36:57.368599 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-02-10 09:36:57.368615 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-02-10 09:36:57.368631 | orchestrator | 2025-02-10 09:36:57.368647 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-02-10 09:36:57.368662 | orchestrator | 2025-02-10 09:36:57.368678 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-02-10 09:36:57.368693 | orchestrator | Monday 10 February 2025 09:34:22 +0000 (0:00:01.143) 0:00:02.771 ******* 2025-02-10 09:36:57.368709 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:36:57.368727 | orchestrator | 2025-02-10 09:36:57.368742 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-02-10 09:36:57.368758 | orchestrator | Monday 10 February 2025 09:34:23 +0000 (0:00:00.834) 0:00:03.605 ******* 2025-02-10 09:36:57.368774 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-02-10 09:36:57.368789 | orchestrator | 2025-02-10 
09:36:57.368804 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-02-10 09:36:57.368820 | orchestrator | Monday 10 February 2025 09:34:27 +0000 (0:00:04.809) 0:00:08.415 ******* 2025-02-10 09:36:57.368835 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-02-10 09:36:57.368851 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-02-10 09:36:57.368867 | orchestrator | 2025-02-10 09:36:57.368882 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-02-10 09:36:57.368899 | orchestrator | Monday 10 February 2025 09:34:35 +0000 (0:00:07.180) 0:00:15.595 ******* 2025-02-10 09:36:57.368915 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-10 09:36:57.368930 | orchestrator | 2025-02-10 09:36:57.368962 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-02-10 09:36:57.368986 | orchestrator | Monday 10 February 2025 09:34:38 +0000 (0:00:03.896) 0:00:19.491 ******* 2025-02-10 09:36:57.369038 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:36:57.369065 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-02-10 09:36:57.369087 | orchestrator | 2025-02-10 09:36:57.369109 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-02-10 09:36:57.369133 | orchestrator | Monday 10 February 2025 09:34:43 +0000 (0:00:04.261) 0:00:23.753 ******* 2025-02-10 09:36:57.369156 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:36:57.369180 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-02-10 09:36:57.369203 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-02-10 09:36:57.369224 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-02-10 09:36:57.369239 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-02-10 09:36:57.369252 | orchestrator | 2025-02-10 09:36:57.369266 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-02-10 09:36:57.369280 | orchestrator | Monday 10 February 2025 09:35:01 +0000 (0:00:18.054) 0:00:41.807 ******* 2025-02-10 09:36:57.369294 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-02-10 09:36:57.369307 | orchestrator | 2025-02-10 09:36:57.369321 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-02-10 09:36:57.369380 | orchestrator | Monday 10 February 2025 09:35:06 +0000 (0:00:05.611) 0:00:47.419 ******* 2025-02-10 09:36:57.369399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:36:57.369422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:36:57.369438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:36:57.369464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.369496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.369512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.369527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.369542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.369556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.369579 | orchestrator | 2025-02-10 09:36:57.369593 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-02-10 09:36:57.369607 | orchestrator | Monday 10 February 2025 09:35:10 +0000 (0:00:03.461) 0:00:50.880 ******* 2025-02-10 09:36:57.369621 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-02-10 09:36:57.369635 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-02-10 09:36:57.369648 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-02-10 09:36:57.369662 | orchestrator | 2025-02-10 09:36:57.369676 | orchestrator | TASK [barbican : Check if policies shall 
be overwritten] *********************** 2025-02-10 09:36:57.369690 | orchestrator | Monday 10 February 2025 09:35:14 +0000 (0:00:04.355) 0:00:55.236 ******* 2025-02-10 09:36:57.369704 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:57.369718 | orchestrator | 2025-02-10 09:36:57.369738 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-02-10 09:36:57.369752 | orchestrator | Monday 10 February 2025 09:35:15 +0000 (0:00:00.579) 0:00:55.815 ******* 2025-02-10 09:36:57.369790 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:57.369804 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:36:57.369819 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:36:57.369832 | orchestrator | 2025-02-10 09:36:57.369846 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-02-10 09:36:57.369860 | orchestrator | Monday 10 February 2025 09:35:16 +0000 (0:00:00.976) 0:00:56.792 ******* 2025-02-10 09:36:57.369874 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:36:57.369888 | orchestrator | 2025-02-10 09:36:57.369902 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-02-10 09:36:57.369916 | orchestrator | Monday 10 February 2025 09:35:17 +0000 (0:00:00.806) 0:00:57.598 ******* 2025-02-10 09:36:57.369941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:36:57.369957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:36:57.369980 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:36:57.369996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.370068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.370089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.370104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.370126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.370141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.370156 | orchestrator | 2025-02-10 09:36:57.370170 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-02-10 09:36:57.370185 | orchestrator | Monday 10 February 2025 09:35:21 +0000 (0:00:04.582) 0:01:02.181 ******* 2025-02-10 09:36:57.370200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:36:57.370222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
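The string of "skipping" results in this task and in the following "Copying over backend internal TLS key" task follows directly from the service definitions being looped over: every barbican service in this deployment carries 'tls_backend': 'no', so there is no backend certificate or key to copy. Rendered as YAML, the relevant part of one of those loop items looks roughly like this (values taken from the log above; the comment on the last line is an assumption about how backend TLS would typically be switched on, e.g. via kolla_enable_tls_backend):

# Service definition as seen in the loop items above (abridged).
barbican-api:
  container_name: barbican_api
  image: nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1
  haproxy:
    barbican_api:
      enabled: "yes"
      mode: http
      port: "9311"
      listen_port: "9311"
      tls_backend: "no"   # assumed: set to "yes" (backend TLS enabled) for the cert/key copy tasks to run
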
2025-02-10 09:36:57.370238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:36:57.370282 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:57.370297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:36:57.370313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:36:57.370327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:36:57.370341 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:36:57.370365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:36:57.370382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:36:57.370403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:36:57.370417 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:36:57.370431 | orchestrator | 2025-02-10 09:36:57.370445 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-02-10 09:36:57.370460 | orchestrator | Monday 10 February 2025 09:35:23 +0000 (0:00:01.536) 0:01:03.717 ******* 2025-02-10 09:36:57.370474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:36:57.370490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:36:57.370512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:36:57.370527 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:57.370542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:36:57.370563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:36:57.370578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:36:57.370592 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:36:57.370607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:36:57.370628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:36:57.370643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:36:57.370664 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:36:57.370679 | orchestrator | 2025-02-10 09:36:57.370693 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-02-10 09:36:57.370708 | orchestrator | Monday 10 February 2025 09:35:25 +0000 (0:00:02.090) 0:01:05.808 ******* 2025-02-10 09:36:57.370735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:36:57.370751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:36:57.370767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:36:57.370789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.370812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.370827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.370841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.370856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.370871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.370886 | orchestrator | 2025-02-10 09:36:57.370900 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-02-10 09:36:57.370914 | orchestrator | Monday 10 February 2025 09:35:30 +0000 (0:00:05.304) 0:01:11.112 ******* 2025-02-10 09:36:57.370929 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:36:57.370943 | orchestrator | changed: 
[testbed-node-1] 2025-02-10 09:36:57.370957 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:36:57.370971 | orchestrator | 2025-02-10 09:36:57.370985 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-02-10 09:36:57.371006 | orchestrator | Monday 10 February 2025 09:35:35 +0000 (0:00:05.314) 0:01:16.427 ******* 2025-02-10 09:36:57.371080 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:36:57.371096 | orchestrator | 2025-02-10 09:36:57.371111 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-02-10 09:36:57.371125 | orchestrator | Monday 10 February 2025 09:35:37 +0000 (0:00:01.978) 0:01:18.406 ******* 2025-02-10 09:36:57.371139 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:36:57.371153 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:57.371167 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:36:57.371181 | orchestrator | 2025-02-10 09:36:57.371195 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-02-10 09:36:57.371209 | orchestrator | Monday 10 February 2025 09:35:42 +0000 (0:00:04.190) 0:01:22.596 ******* 2025-02-10 09:36:57.371223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:36:57.371239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:36:57.371255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:36:57.371271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.371299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.371313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.371326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 
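Each service item printed in the tasks above carries a 'healthcheck' block (interval, retries, start_period, test, timeout). As an aside, the sketch below shows roughly how such a block could be rendered into Docker healthcheck options; the dict mirrors the barbican_api entry from this log, but the flag mapping is a simplified illustration, not kolla-ansible's actual code.

# Illustrative sketch only: render a kolla-style healthcheck dict (as printed
# in the task output above) into "docker run" health flags. The mapping is a
# simplification and not taken from kolla-ansible itself.
def healthcheck_to_docker_args(hc):
    return [
        "--health-cmd", hc["test"][1],
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

barbican_api_healthcheck = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
    "timeout": "30",
}
print(" ".join(healthcheck_to_docker_args(barbican_api_healthcheck)))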
09:36:57.371339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.371352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.371364 | orchestrator | 2025-02-10 09:36:57.371377 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-02-10 09:36:57.371396 | orchestrator | Monday 10 February 2025 09:35:55 +0000 (0:00:13.397) 0:01:35.994 ******* 2025-02-10 09:36:57.371418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:36:57.371432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:36:57.371445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': 
''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:36:57.371458 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:36:57.371471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:36:57.371486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:36:57.371505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:36:57.371518 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:57.371538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-10 09:36:57.371552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-10 09:36:57.371565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:36:57.371578 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:36:57.371591 | orchestrator | 2025-02-10 09:36:57.371604 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-02-10 09:36:57.371616 | orchestrator | Monday 10 February 2025 09:35:58 +0000 (0:00:02.879) 0:01:38.873 ******* 2025-02-10 09:36:57.371629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:36:57.371656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:36:57.371670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-10 09:36:57.371683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.371697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.371710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.371737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.371772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.371795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:36:57.371817 | orchestrator | 2025-02-10 09:36:57.371838 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-02-10 09:36:57.371861 | orchestrator | Monday 10 February 2025 09:36:03 +0000 (0:00:05.134) 0:01:44.008 ******* 2025-02-10 09:36:57.371885 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:36:57.371907 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:36:57.371928 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:36:57.371942 | orchestrator | 2025-02-10 09:36:57.371954 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-02-10 09:36:57.371966 | orchestrator | Monday 10 February 2025 09:36:05 +0000 (0:00:01.675) 0:01:45.684 ******* 2025-02-10 09:36:57.371979 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:36:57.371991 | orchestrator | 2025-02-10 09:36:57.372003 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-02-10 09:36:57.372072 | orchestrator | Monday 10 February 2025 09:36:07 +0000 (0:00:02.156) 0:01:47.840 ******* 2025-02-10 09:36:57.372087 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:36:57.372100 | orchestrator | 2025-02-10 09:36:57.372112 | 
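The two database tasks above ("Creating barbican database" and "Creating barbican database user and setting permissions") run only against testbed-node-0. In effect they amount to something like the SQL below; the user, host pattern and password source are placeholders, and kolla-ansible performs these steps through its own modules rather than literal statements.

# Rough SQL equivalent of the two database tasks above. User, host and grant
# scope are assumptions for illustration only.
db_name = "barbican"
db_user = "barbican"
db_password = "<taken from the generated passwords file>"  # placeholder
statements = [
    f"CREATE DATABASE IF NOT EXISTS {db_name};",
    f"CREATE USER IF NOT EXISTS '{db_user}'@'%' IDENTIFIED BY '{db_password}';",
    f"GRANT ALL PRIVILEGES ON {db_name}.* TO '{db_user}'@'%';",
]
print("\n".join(statements))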
orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-02-10 09:36:57.372124 | orchestrator | Monday 10 February 2025 09:36:09 +0000 (0:00:02.218) 0:01:50.058 ******* 2025-02-10 09:36:57.372137 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:36:57.372150 | orchestrator | 2025-02-10 09:36:57.372169 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-02-10 09:36:57.372192 | orchestrator | Monday 10 February 2025 09:36:20 +0000 (0:00:11.288) 0:02:01.347 ******* 2025-02-10 09:36:57.372205 | orchestrator | 2025-02-10 09:36:57.372217 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-02-10 09:36:57.372229 | orchestrator | Monday 10 February 2025 09:36:21 +0000 (0:00:00.910) 0:02:02.257 ******* 2025-02-10 09:36:57.372242 | orchestrator | 2025-02-10 09:36:57.372254 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-02-10 09:36:57.372266 | orchestrator | Monday 10 February 2025 09:36:21 +0000 (0:00:00.238) 0:02:02.496 ******* 2025-02-10 09:36:57.372279 | orchestrator | 2025-02-10 09:36:57.372291 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-02-10 09:36:57.372303 | orchestrator | Monday 10 February 2025 09:36:22 +0000 (0:00:00.216) 0:02:02.713 ******* 2025-02-10 09:36:57.372316 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:36:57.372328 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:36:57.372340 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:36:57.372353 | orchestrator | 2025-02-10 09:36:57.372365 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-02-10 09:36:57.372377 | orchestrator | Monday 10 February 2025 09:36:31 +0000 (0:00:09.782) 0:02:12.496 ******* 2025-02-10 09:36:57.372389 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:36:57.372402 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:36:57.372414 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:36:57.372426 | orchestrator | 2025-02-10 09:36:57.372438 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-02-10 09:36:57.372451 | orchestrator | Monday 10 February 2025 09:36:44 +0000 (0:00:12.118) 0:02:24.614 ******* 2025-02-10 09:36:57.372463 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:36:57.372476 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:36:57.372488 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:36:57.372500 | orchestrator | 2025-02-10 09:36:57.372512 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:36:57.372526 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-02-10 09:36:57.372539 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:36:57.372552 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:36:57.372564 | orchestrator | 2025-02-10 09:36:57.372576 | orchestrator | 2025-02-10 09:36:57.372588 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:36:57.372601 | orchestrator | Monday 10 February 2025 09:36:56 +0000 (0:00:12.258) 0:02:36.873 ******* 2025-02-10 
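After the bootstrap step, the three handlers above restart the barbican_api, barbican_keystone_listener and barbican_worker containers on every node. A minimal sketch of that effect with the Docker SDK for Python follows; the real handlers go through kolla-ansible's container module and may recreate the containers rather than restart them in place.

# Minimal sketch, assuming the Docker SDK for Python is available on the
# target hosts. kolla-ansible's own handlers use a dedicated container
# module, so this only illustrates the end result.
import docker

def restart_barbican_containers():
    client = docker.from_env()
    for name in ("barbican_api", "barbican_keystone_listener", "barbican_worker"):
        client.containers.get(name).restart()
        print(f"restarted {name}")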
09:36:57.372611 | orchestrator | =============================================================================== 2025-02-10 09:36:57.372621 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 18.05s 2025-02-10 09:36:57.372638 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 13.40s 2025-02-10 09:37:00.419282 | orchestrator | barbican : Restart barbican-worker container --------------------------- 12.26s 2025-02-10 09:37:00.419425 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 12.12s 2025-02-10 09:37:00.419446 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.29s 2025-02-10 09:37:00.419461 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.78s 2025-02-10 09:37:00.419476 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.18s 2025-02-10 09:37:00.419490 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 5.61s 2025-02-10 09:37:00.419504 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 5.31s 2025-02-10 09:37:00.419518 | orchestrator | barbican : Copying over config.json files for services ------------------ 5.30s 2025-02-10 09:37:00.419563 | orchestrator | barbican : Check barbican containers ------------------------------------ 5.13s 2025-02-10 09:37:00.419577 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.81s 2025-02-10 09:37:00.419591 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.58s 2025-02-10 09:37:00.419605 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 4.36s 2025-02-10 09:37:00.419620 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.26s 2025-02-10 09:37:00.419634 | orchestrator | barbican : Copying over barbican-api-paste.ini -------------------------- 4.19s 2025-02-10 09:37:00.419648 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.90s 2025-02-10 09:37:00.419662 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.46s 2025-02-10 09:37:00.419676 | orchestrator | barbican : Copying over existing policy file ---------------------------- 2.88s 2025-02-10 09:37:00.419690 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.22s 2025-02-10 09:37:00.419705 | orchestrator | 2025-02-10 09:36:57 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:00.419720 | orchestrator | 2025-02-10 09:36:57 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:00.419754 | orchestrator | 2025-02-10 09:37:00 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:00.422305 | orchestrator | 2025-02-10 09:37:00 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:00.422340 | orchestrator | 2025-02-10 09:37:00 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:00.422364 | orchestrator | 2025-02-10 09:37:00 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:03.468390 | orchestrator | 2025-02-10 09:37:00 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:03.468551 | orchestrator | 2025-02-10 
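Most of the time in the recap above went into the service-ks-register steps (creating the barbican service, endpoints, users and roles in Keystone). The openstacksdk sketch below shows what the service/endpoint part boils down to; the cloud name, region and endpoint URLs are placeholders inferred from the API hostnames used elsewhere in this log, not values read from this deployment's configuration.

# Hedged sketch using openstacksdk: register a key-manager service and its
# endpoints. Cloud name, region and URLs below are placeholders.
import openstack

conn = openstack.connect(cloud="testbed")  # placeholder clouds.yaml entry
service = conn.identity.create_service(name="barbican", type="key-manager")
for interface, url in (
    ("internal", "https://api-int.testbed.osism.xyz:9311"),
    ("public", "https://api.testbed.osism.xyz:9311"),
):
    conn.identity.create_endpoint(
        service_id=service.id, interface=interface, url=url, region_id="RegionOne"
    )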
09:37:03 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:03.469190 | orchestrator | 2025-02-10 09:37:03 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:03.469545 | orchestrator | 2025-02-10 09:37:03 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:06.497275 | orchestrator | 2025-02-10 09:37:03 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:06.497383 | orchestrator | 2025-02-10 09:37:03 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:06.497404 | orchestrator | 2025-02-10 09:37:06 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:06.497810 | orchestrator | 2025-02-10 09:37:06 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:06.498802 | orchestrator | 2025-02-10 09:37:06 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:06.502403 | orchestrator | 2025-02-10 09:37:06 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:09.534788 | orchestrator | 2025-02-10 09:37:06 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:09.534958 | orchestrator | 2025-02-10 09:37:09 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:12.568602 | orchestrator | 2025-02-10 09:37:09 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:12.568728 | orchestrator | 2025-02-10 09:37:09 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:12.568745 | orchestrator | 2025-02-10 09:37:09 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:12.568789 | orchestrator | 2025-02-10 09:37:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:12.568818 | orchestrator | 2025-02-10 09:37:12 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:12.569595 | orchestrator | 2025-02-10 09:37:12 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:12.570667 | orchestrator | 2025-02-10 09:37:12 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:12.571451 | orchestrator | 2025-02-10 09:37:12 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:15.603776 | orchestrator | 2025-02-10 09:37:12 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:15.603935 | orchestrator | 2025-02-10 09:37:15 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:15.604441 | orchestrator | 2025-02-10 09:37:15 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:15.605886 | orchestrator | 2025-02-10 09:37:15 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:15.607378 | orchestrator | 2025-02-10 09:37:15 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:15.607487 | orchestrator | 2025-02-10 09:37:15 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:18.649282 | orchestrator | 2025-02-10 09:37:18 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:21.678660 | orchestrator | 2025-02-10 09:37:18 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:21.678804 | orchestrator | 2025-02-10 
09:37:18 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:21.678823 | orchestrator | 2025-02-10 09:37:18 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:21.678840 | orchestrator | 2025-02-10 09:37:18 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:21.678874 | orchestrator | 2025-02-10 09:37:21 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:21.679513 | orchestrator | 2025-02-10 09:37:21 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:21.679624 | orchestrator | 2025-02-10 09:37:21 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:21.679684 | orchestrator | 2025-02-10 09:37:21 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:24.726448 | orchestrator | 2025-02-10 09:37:21 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:24.726598 | orchestrator | 2025-02-10 09:37:24 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:24.727418 | orchestrator | 2025-02-10 09:37:24 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:24.729318 | orchestrator | 2025-02-10 09:37:24 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:24.731139 | orchestrator | 2025-02-10 09:37:24 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:27.774723 | orchestrator | 2025-02-10 09:37:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:27.774902 | orchestrator | 2025-02-10 09:37:27 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:27.776163 | orchestrator | 2025-02-10 09:37:27 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:27.776266 | orchestrator | 2025-02-10 09:37:27 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:27.777550 | orchestrator | 2025-02-10 09:37:27 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:30.814210 | orchestrator | 2025-02-10 09:37:27 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:30.814391 | orchestrator | 2025-02-10 09:37:30 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:30.814553 | orchestrator | 2025-02-10 09:37:30 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:30.814591 | orchestrator | 2025-02-10 09:37:30 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:30.815286 | orchestrator | 2025-02-10 09:37:30 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:30.816567 | orchestrator | 2025-02-10 09:37:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:33.847424 | orchestrator | 2025-02-10 09:37:33 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:33.847935 | orchestrator | 2025-02-10 09:37:33 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:33.847986 | orchestrator | 2025-02-10 09:37:33 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:33.848058 | orchestrator | 2025-02-10 09:37:33 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:36.885629 | orchestrator | 2025-02-10 
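The long run of INFO lines here is the deployment driver polling the state of the remaining tasks until each one leaves STARTED. A small sketch of that wait loop is shown below; "get_task_state" is a placeholder callable, not a real osism API.

# Sketch of the polling pattern visible in the surrounding INFO lines:
# query each task's state, report it, and sleep between rounds until all
# tasks have finished.
import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("STARTED", "PENDING"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)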
09:37:33 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:36.885796 | orchestrator | 2025-02-10 09:37:36 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:36.886515 | orchestrator | 2025-02-10 09:37:36 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:36.886583 | orchestrator | 2025-02-10 09:37:36 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:36.888258 | orchestrator | 2025-02-10 09:37:36 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:39.917565 | orchestrator | 2025-02-10 09:37:36 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:39.917774 | orchestrator | 2025-02-10 09:37:39 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:39.918415 | orchestrator | 2025-02-10 09:37:39 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:39.918485 | orchestrator | 2025-02-10 09:37:39 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:39.918522 | orchestrator | 2025-02-10 09:37:39 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:42.978446 | orchestrator | 2025-02-10 09:37:39 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:42.978664 | orchestrator | 2025-02-10 09:37:42 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:42.979124 | orchestrator | 2025-02-10 09:37:42 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:42.979163 | orchestrator | 2025-02-10 09:37:42 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:42.979498 | orchestrator | 2025-02-10 09:37:42 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:42.979531 | orchestrator | 2025-02-10 09:37:42 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:46.047490 | orchestrator | 2025-02-10 09:37:46 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:46.048003 | orchestrator | 2025-02-10 09:37:46 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:46.048079 | orchestrator | 2025-02-10 09:37:46 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:46.051668 | orchestrator | 2025-02-10 09:37:46 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:49.099302 | orchestrator | 2025-02-10 09:37:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:49.099446 | orchestrator | 2025-02-10 09:37:49 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:49.100321 | orchestrator | 2025-02-10 09:37:49 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:49.100367 | orchestrator | 2025-02-10 09:37:49 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:49.104337 | orchestrator | 2025-02-10 09:37:49 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:52.158977 | orchestrator | 2025-02-10 09:37:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:52.160075 | orchestrator | 2025-02-10 09:37:52 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:55.199248 | orchestrator | 2025-02-10 09:37:52 | INFO  | Task 
a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:55.199382 | orchestrator | 2025-02-10 09:37:52 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:55.199396 | orchestrator | 2025-02-10 09:37:52 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:55.199403 | orchestrator | 2025-02-10 09:37:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:55.199424 | orchestrator | 2025-02-10 09:37:55 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:55.200488 | orchestrator | 2025-02-10 09:37:55 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:55.201101 | orchestrator | 2025-02-10 09:37:55 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:55.204213 | orchestrator | 2025-02-10 09:37:55 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:37:58.261957 | orchestrator | 2025-02-10 09:37:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:37:58.262218 | orchestrator | 2025-02-10 09:37:58 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:37:58.262736 | orchestrator | 2025-02-10 09:37:58 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:37:58.262767 | orchestrator | 2025-02-10 09:37:58 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:37:58.262791 | orchestrator | 2025-02-10 09:37:58 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:38:01.302255 | orchestrator | 2025-02-10 09:37:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:01.302423 | orchestrator | 2025-02-10 09:38:01 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:01.302654 | orchestrator | 2025-02-10 09:38:01 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:38:01.303394 | orchestrator | 2025-02-10 09:38:01 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:01.304487 | orchestrator | 2025-02-10 09:38:01 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:38:04.346429 | orchestrator | 2025-02-10 09:38:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:04.346592 | orchestrator | 2025-02-10 09:38:04 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:04.346744 | orchestrator | 2025-02-10 09:38:04 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:38:04.346761 | orchestrator | 2025-02-10 09:38:04 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:04.346782 | orchestrator | 2025-02-10 09:38:04 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:38:07.376218 | orchestrator | 2025-02-10 09:38:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:07.376367 | orchestrator | 2025-02-10 09:38:07 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:07.376512 | orchestrator | 2025-02-10 09:38:07 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:38:07.377002 | orchestrator | 2025-02-10 09:38:07 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:07.377457 | orchestrator | 2025-02-10 09:38:07 | INFO  | Task 
19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:38:07.377549 | orchestrator | 2025-02-10 09:38:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:10.418246 | orchestrator | 2025-02-10 09:38:10 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:10.423361 | orchestrator | 2025-02-10 09:38:10 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:38:10.423508 | orchestrator | 2025-02-10 09:38:10 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:13.454679 | orchestrator | 2025-02-10 09:38:10 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:38:13.454808 | orchestrator | 2025-02-10 09:38:10 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:13.454841 | orchestrator | 2025-02-10 09:38:13 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:13.456120 | orchestrator | 2025-02-10 09:38:13 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:38:13.456527 | orchestrator | 2025-02-10 09:38:13 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:13.457928 | orchestrator | 2025-02-10 09:38:13 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:38:13.459287 | orchestrator | 2025-02-10 09:38:13 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:16.504503 | orchestrator | 2025-02-10 09:38:16 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:16.506182 | orchestrator | 2025-02-10 09:38:16 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:38:16.506252 | orchestrator | 2025-02-10 09:38:16 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:16.508466 | orchestrator | 2025-02-10 09:38:16 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:38:19.551925 | orchestrator | 2025-02-10 09:38:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:19.552120 | orchestrator | 2025-02-10 09:38:19 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:19.552436 | orchestrator | 2025-02-10 09:38:19 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:38:19.554263 | orchestrator | 2025-02-10 09:38:19 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:19.555096 | orchestrator | 2025-02-10 09:38:19 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:38:22.612233 | orchestrator | 2025-02-10 09:38:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:22.612395 | orchestrator | 2025-02-10 09:38:22 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:22.614172 | orchestrator | 2025-02-10 09:38:22 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:38:22.614222 | orchestrator | 2025-02-10 09:38:22 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:22.616824 | orchestrator | 2025-02-10 09:38:22 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state STARTED 2025-02-10 09:38:25.651791 | orchestrator | 2025-02-10 09:38:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:25.652106 | orchestrator | 2025-02-10 09:38:25 | INFO  | Task 
c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:25.653301 | orchestrator | 2025-02-10 09:38:25 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:38:25.653332 | orchestrator | 2025-02-10 09:38:25 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:38:25.653348 | orchestrator | 2025-02-10 09:38:25 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:25.653370 | orchestrator | 2025-02-10 09:38:25 | INFO  | Task 19ed791c-5771-4b88-8f6a-74c48d05d997 is in state SUCCESS 2025-02-10 09:38:25.654648 | orchestrator | 2025-02-10 09:38:25.655492 | orchestrator | 2025-02-10 09:38:25.655520 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:38:25.655535 | orchestrator | 2025-02-10 09:38:25.655549 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:38:25.655564 | orchestrator | Monday 10 February 2025 09:34:18 +0000 (0:00:00.361) 0:00:00.361 ******* 2025-02-10 09:38:25.655577 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:38:25.655650 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:38:25.655666 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:38:25.655681 | orchestrator | 2025-02-10 09:38:25.655695 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:38:25.655709 | orchestrator | Monday 10 February 2025 09:34:18 +0000 (0:00:00.557) 0:00:00.919 ******* 2025-02-10 09:38:25.655724 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-02-10 09:38:25.655738 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-02-10 09:38:25.655752 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-02-10 09:38:25.655767 | orchestrator | 2025-02-10 09:38:25.655781 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-02-10 09:38:25.655795 | orchestrator | 2025-02-10 09:38:25.655809 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-02-10 09:38:25.655823 | orchestrator | Monday 10 February 2025 09:34:19 +0000 (0:00:00.815) 0:00:01.735 ******* 2025-02-10 09:38:25.655837 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:38:25.655853 | orchestrator | 2025-02-10 09:38:25.655867 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-02-10 09:38:25.655881 | orchestrator | Monday 10 February 2025 09:34:20 +0000 (0:00:01.157) 0:00:02.892 ******* 2025-02-10 09:38:25.655902 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-02-10 09:38:25.655917 | orchestrator | 2025-02-10 09:38:25.655933 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-02-10 09:38:25.655948 | orchestrator | Monday 10 February 2025 09:34:25 +0000 (0:00:04.670) 0:00:07.562 ******* 2025-02-10 09:38:25.655990 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-02-10 09:38:25.656005 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-02-10 09:38:25.656395 | orchestrator | 2025-02-10 09:38:25.656410 | orchestrator | TASK [service-ks-register : designate | 
Creating projects] ********************* 2025-02-10 09:38:25.656424 | orchestrator | Monday 10 February 2025 09:34:32 +0000 (0:00:07.375) 0:00:14.938 ******* 2025-02-10 09:38:25.656439 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-02-10 09:38:25.656453 | orchestrator | 2025-02-10 09:38:25.656467 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-02-10 09:38:25.656481 | orchestrator | Monday 10 February 2025 09:34:37 +0000 (0:00:04.381) 0:00:19.319 ******* 2025-02-10 09:38:25.656495 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:38:25.656509 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-02-10 09:38:25.656523 | orchestrator | 2025-02-10 09:38:25.656537 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-02-10 09:38:25.656551 | orchestrator | Monday 10 February 2025 09:34:41 +0000 (0:00:04.513) 0:00:23.833 ******* 2025-02-10 09:38:25.656564 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:38:25.656578 | orchestrator | 2025-02-10 09:38:25.656592 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-02-10 09:38:25.656606 | orchestrator | Monday 10 February 2025 09:34:45 +0000 (0:00:03.668) 0:00:27.501 ******* 2025-02-10 09:38:25.656620 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-02-10 09:38:25.656635 | orchestrator | 2025-02-10 09:38:25.656649 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-02-10 09:38:25.656663 | orchestrator | Monday 10 February 2025 09:34:50 +0000 (0:00:04.517) 0:00:32.018 ******* 2025-02-10 09:38:25.656679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:38:25.656736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}}}}) 2025-02-10 09:38:25.656753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:38:25.656782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.656798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.656813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.656828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.656872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.656888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.656913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.656928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.656943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.656958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.656974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.657169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.657184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.657215 | orchestrator | 2025-02-10 09:38:25.657231 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-02-10 09:38:25.657275 | orchestrator | Monday 10 February 2025 09:34:53 +0000 (0:00:03.852) 0:00:35.871 ******* 2025-02-10 09:38:25.657291 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:25.657314 | orchestrator | 2025-02-10 09:38:25.657328 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-02-10 09:38:25.657342 | orchestrator | Monday 10 February 2025 09:34:54 +0000 (0:00:00.218) 0:00:36.089 ******* 2025-02-10 09:38:25.657356 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:25.657370 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:38:25.657384 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:38:25.657398 | orchestrator | 2025-02-10 09:38:25.657412 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-02-10 09:38:25.657434 | orchestrator | Monday 10 February 2025 09:34:54 +0000 (0:00:00.608) 0:00:36.698 ******* 2025-02-10 09:38:25.657449 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:38:25.657463 | orchestrator | 2025-02-10 09:38:25.657478 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-02-10 09:38:25.657491 | orchestrator | Monday 10 February 2025 09:34:55 +0000 (0:00:00.909) 0:00:37.607 ******* 2025-02-10 09:38:25.657506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:38:25.657521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:38:25.657536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:38:25.657551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 
5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.657848 | orchestrator | 2025-02-10 09:38:25.657861 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-02-10 09:38:25.657874 | orchestrator | Monday 10 February 2025 09:35:02 +0000 (0:00:06.811) 0:00:44.418 ******* 2025-02-10 09:38:25.657887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:38:25.657901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:38:25.657914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.657927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.657947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.657981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.657995 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:25.658009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:38:25.658075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:38:25.658090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658177 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:38:25.658197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:38:25.658212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:38:25.658225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658291 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:38:25.658303 | orchestrator | 2025-02-10 09:38:25.658338 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-02-10 09:38:25.658352 | orchestrator | Monday 10 February 2025 09:35:06 +0000 (0:00:03.846) 0:00:48.265 ******* 2025-02-10 09:38:25.658365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:38:25.658378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:38:25.658392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:38:25.658475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:38:25.658488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658560 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:38:25.658573 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:25.658609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:38:25.658623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:38:25.658636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.658695 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:38:25.658707 | orchestrator | 2025-02-10 09:38:25.658720 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-02-10 09:38:25.658732 | orchestrator | Monday 10 February 2025 09:35:09 +0000 (0:00:03.453) 0:00:51.718 ******* 2025-02-10 09:38:25.658768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:38:25.658783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:38:25.658796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.658818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.658840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:38:25.658895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.658918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.658939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.658971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.658993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 
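For context, each loop item above is one entry from kolla-ansible's designate service map: a container name, the registry image, bind mounts, and a Docker healthcheck definition. A minimal Python sketch (illustrative only, with values copied from a few of the items above; not code run by this job) showing how such a mapping separates enabled services from the skipped designate-sink and which healthcheck command each container gets:

# Illustrative subset of the service map seen in the loop items above.
designate_services = {
    "designate-api": {
        "container_name": "designate_api",
        "enabled": True,
        "healthcheck": {"interval": "30", "retries": "3", "start_period": "5",
                        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"],
                        "timeout": "30"},
    },
    "designate-central": {
        "container_name": "designate_central",
        "enabled": True,
        "healthcheck": {"interval": "30", "retries": "3", "start_period": "5",
                        "test": ["CMD-SHELL", "healthcheck_port designate-central 5672"],
                        "timeout": "30"},
    },
    "designate-sink": {
        "container_name": "designate_sink",
        "enabled": False,
        "healthcheck": {"interval": "30", "retries": "3", "start_period": "5",
                        "test": ["CMD-SHELL", "healthcheck_port designate-sink 5672"],
                        "timeout": "30"},
    },
}

# Enabled entries correspond to the 'changed' lines above; disabled ones
# (designate-sink) correspond to the 'skipping' lines.
for name, svc in designate_services.items():
    action = "deploy" if svc["enabled"] else "skip"
    check = svc["healthcheck"]["test"][1]
    print(f"{name} ({svc['container_name']}): {action}, healthcheck: {check}")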
09:38:25.659055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.659181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:25.659230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.659282 | orchestrator | 2025-02-10 09:38:25.659295 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-02-10 09:38:25.659308 | orchestrator | Monday 10 February 2025 09:35:18 +0000 (0:00:09.180) 0:01:00.899 ******* 2025-02-10 09:38:25.659321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:38:25.659334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:38:25.659347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:38:25.659385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}}) 2025-02-10 09:38:25.659598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.659612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.659646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.659672 | orchestrator | 2025-02-10 09:38:25.659684 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-02-10 09:38:25.659697 | orchestrator | Monday 10 February 2025 09:35:51 +0000 (0:00:32.250) 0:01:33.150 ******* 2025-02-10 09:38:25.659710 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-02-10 
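The healthcheck commands visible in these items come in two styles: healthcheck_curl against the designate-api endpoint on port 9001, and a port/listen check such as healthcheck_listen named 53 for the bind9 backend. A rough stand-alone approximation in Python (host addresses assumed from the log; the real kolla helpers run inside the containers and do more than this):

import socket
from urllib import error, request

API_ENDPOINTS = [f"http://192.168.16.{i}:9001" for i in (10, 11, 12)]  # from the healthcheck_curl items
DNS_HOSTS = [f"192.168.16.{i}" for i in (10, 11, 12)]                  # assumption: bind9 on the same nodes

def http_alive(url: str, timeout: float = 5.0) -> bool:
    """Rough analogue of healthcheck_curl: any HTTP answer counts as alive."""
    try:
        with request.urlopen(url, timeout=timeout):
            return True
    except error.HTTPError:
        return True   # the API answered, even if with an error status
    except OSError:
        return False

def tcp_listening(host: str, port: int, timeout: float = 5.0) -> bool:
    """Rough analogue of healthcheck_listen: is anything accepting connections on the port?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for url in API_ENDPOINTS:
    print(url, "->", "alive" if http_alive(url) else "unreachable")
for host in DNS_HOSTS:
    print(f"{host}:53 ->", "listening" if tcp_listening(host, 53) else "closed")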
09:38:25.659722 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-02-10 09:38:25.659734 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-02-10 09:38:25.659747 | orchestrator | 2025-02-10 09:38:25.659759 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-02-10 09:38:25.659772 | orchestrator | Monday 10 February 2025 09:36:03 +0000 (0:00:12.274) 0:01:45.424 ******* 2025-02-10 09:38:25.659784 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-02-10 09:38:25.659797 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-02-10 09:38:25.659809 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-02-10 09:38:25.659821 | orchestrator | 2025-02-10 09:38:25.659834 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-02-10 09:38:25.659847 | orchestrator | Monday 10 February 2025 09:36:08 +0000 (0:00:05.210) 0:01:50.634 ******* 2025-02-10 09:38:25.659868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:38:25.659928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:38:25.659944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:38:25.659958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.659984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.660132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.660203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.660230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 
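The designate-api items above also carry a 'haproxy' mapping with an internal listener and an external one behind api.testbed.osism.xyz, both on port 9001; kolla-ansible uses that mapping elsewhere to render the load-balancer configuration. A small Python sketch that only flattens the structure shown in the log into a readable summary (not the template kolla-ansible actually renders):

# Values copied from the designate-api 'haproxy' sub-dict in the loop items above.
haproxy_services = {
    "designate_api": {"enabled": "yes", "mode": "http", "external": False,
                      "port": "9001", "listen_port": "9001"},
    "designate_api_external": {"enabled": "yes", "mode": "http", "external": True,
                               "external_fqdn": "api.testbed.osism.xyz",
                               "port": "9001", "listen_port": "9001"},
}

for name, svc in haproxy_services.items():
    scope = f"external via {svc['external_fqdn']}" if svc["external"] else "internal"
    print(f"{name}: {scope}, {svc['mode']} listener on :{svc['listen_port']} "
          f"forwarding to designate-api on :{svc['port']}")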
09:38:25.660254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.660278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660292 | orchestrator | 2025-02-10 09:38:25.660305 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-02-10 09:38:25.660317 | orchestrator | Monday 10 February 2025 09:36:13 +0000 (0:00:04.923) 0:01:55.558 ******* 2025-02-10 09:38:25.660330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:38:25.660343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:38:25.660357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.660369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:38:25.660444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.660457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.660534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.660560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.660641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.660667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660680 | orchestrator | 2025-02-10 09:38:25.660692 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-02-10 09:38:25.660705 | orchestrator | Monday 10 February 2025 09:36:18 +0000 (0:00:04.590) 0:02:00.148 ******* 2025-02-10 09:38:25.660717 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:25.660730 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:38:25.660742 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:38:25.660755 | orchestrator | 2025-02-10 09:38:25.660768 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-02-10 09:38:25.660780 | orchestrator | Monday 10 February 2025 09:36:18 +0000 (0:00:00.378) 0:02:00.526 ******* 2025-02-10 09:38:25.660799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:38:25.660822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:38:25.660842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660925 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:25.660939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:38:25.660957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:38:25.660970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.660996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.661018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.661068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.661085 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:38:25.661106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-10 09:38:25.661119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-10 09:38:25.661132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.661145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.661165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.661189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.661202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.661215 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:38:25.661228 | orchestrator | 2025-02-10 09:38:25.661241 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-02-10 09:38:25.661253 | orchestrator | Monday 10 February 2025 09:36:20 +0000 (0:00:01.728) 0:02:02.255 ******* 2025-02-10 09:38:25.661272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:38:25.661299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:38:25.661319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-10 09:38:25.661332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.661345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.661364 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-10 09:38:25.661377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.661390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.661409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.661434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.661447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.661460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.661478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.661492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.661505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.661535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.661549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.661563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.661576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.661594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:38:25.661608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-10 09:38:25.661627 | orchestrator | 2025-02-10 09:38:25.661640 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-02-10 09:38:25.661652 | orchestrator | Monday 10 February 2025 09:36:29 +0000 (0:00:09.386) 0:02:11.642 ******* 2025-02-10 09:38:25.661665 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:25.661677 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:38:25.661689 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:38:25.661701 | orchestrator | 2025-02-10 09:38:25.661714 | orchestrator | TASK [designate : Creating Designate databases] 
******************************** 2025-02-10 09:38:25.661726 | orchestrator | Monday 10 February 2025 09:36:31 +0000 (0:00:01.939) 0:02:13.581 ******* 2025-02-10 09:38:25.661738 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-02-10 09:38:25.661751 | orchestrator | 2025-02-10 09:38:25.661763 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-02-10 09:38:25.661776 | orchestrator | Monday 10 February 2025 09:36:34 +0000 (0:00:02.651) 0:02:16.232 ******* 2025-02-10 09:38:25.661788 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-10 09:38:25.661801 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-02-10 09:38:25.661813 | orchestrator | 2025-02-10 09:38:25.661825 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-02-10 09:38:25.661842 | orchestrator | Monday 10 February 2025 09:36:37 +0000 (0:00:02.902) 0:02:19.135 ******* 2025-02-10 09:38:25.661855 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:25.661867 | orchestrator | 2025-02-10 09:38:25.661880 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-02-10 09:38:25.661892 | orchestrator | Monday 10 February 2025 09:36:53 +0000 (0:00:15.956) 0:02:35.091 ******* 2025-02-10 09:38:25.661904 | orchestrator | 2025-02-10 09:38:25.661917 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-02-10 09:38:25.661929 | orchestrator | Monday 10 February 2025 09:36:53 +0000 (0:00:00.262) 0:02:35.354 ******* 2025-02-10 09:38:25.661942 | orchestrator | 2025-02-10 09:38:25.661954 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-02-10 09:38:25.661967 | orchestrator | Monday 10 February 2025 09:36:53 +0000 (0:00:00.215) 0:02:35.570 ******* 2025-02-10 09:38:25.661979 | orchestrator | 2025-02-10 09:38:25.661991 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-02-10 09:38:25.662004 | orchestrator | Monday 10 February 2025 09:36:53 +0000 (0:00:00.196) 0:02:35.766 ******* 2025-02-10 09:38:25.662084 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:38:25.662101 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:25.662113 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:38:25.662125 | orchestrator | 2025-02-10 09:38:25.662138 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-02-10 09:38:25.662150 | orchestrator | Monday 10 February 2025 09:37:08 +0000 (0:00:14.974) 0:02:50.740 ******* 2025-02-10 09:38:25.662162 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:25.662174 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:38:25.662187 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:38:25.662199 | orchestrator | 2025-02-10 09:38:25.662211 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-02-10 09:38:25.662223 | orchestrator | Monday 10 February 2025 09:37:17 +0000 (0:00:09.053) 0:02:59.793 ******* 2025-02-10 09:38:25.662236 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:25.662248 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:38:25.662260 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:38:25.662272 | orchestrator | 2025-02-10 09:38:25.662285 | orchestrator | RUNNING HANDLER [designate : Restart 
designate-producer container] ************* 2025-02-10 09:38:25.662297 | orchestrator | Monday 10 February 2025 09:37:28 +0000 (0:00:10.785) 0:03:10.579 ******* 2025-02-10 09:38:25.662309 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:25.662321 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:38:25.662334 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:38:25.662346 | orchestrator | 2025-02-10 09:38:25.662365 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-02-10 09:38:25.662378 | orchestrator | Monday 10 February 2025 09:37:42 +0000 (0:00:13.700) 0:03:24.280 ******* 2025-02-10 09:38:25.662390 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:38:25.662402 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:25.662415 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:38:25.662427 | orchestrator | 2025-02-10 09:38:25.662439 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-02-10 09:38:25.662458 | orchestrator | Monday 10 February 2025 09:38:00 +0000 (0:00:18.372) 0:03:42.653 ******* 2025-02-10 09:38:28.684135 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:28.685112 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:38:28.685151 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:38:28.685169 | orchestrator | 2025-02-10 09:38:28.685187 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-02-10 09:38:28.685204 | orchestrator | Monday 10 February 2025 09:38:16 +0000 (0:00:15.709) 0:03:58.362 ******* 2025-02-10 09:38:28.685220 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:28.685235 | orchestrator | 2025-02-10 09:38:28.685250 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:38:28.685268 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-02-10 09:38:28.685285 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:38:28.685396 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:38:28.685416 | orchestrator | 2025-02-10 09:38:28.685431 | orchestrator | 2025-02-10 09:38:28.685447 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:38:28.685463 | orchestrator | Monday 10 February 2025 09:38:23 +0000 (0:00:06.906) 0:04:05.269 ******* 2025-02-10 09:38:28.685478 | orchestrator | =============================================================================== 2025-02-10 09:38:28.685493 | orchestrator | designate : Copying over designate.conf -------------------------------- 32.25s 2025-02-10 09:38:28.685509 | orchestrator | designate : Restart designate-mdns container --------------------------- 18.37s 2025-02-10 09:38:28.685523 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.96s 2025-02-10 09:38:28.685537 | orchestrator | designate : Restart designate-worker container ------------------------- 15.71s 2025-02-10 09:38:28.685551 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.97s 2025-02-10 09:38:28.685565 | orchestrator | designate : Restart designate-producer container ----------------------- 13.70s 2025-02-10 09:38:28.685579 | orchestrator | designate : Copying 
over pools.yaml ------------------------------------ 12.27s 2025-02-10 09:38:28.685621 | orchestrator | designate : Restart designate-central container ------------------------ 10.79s 2025-02-10 09:38:28.685636 | orchestrator | designate : Check designate containers ---------------------------------- 9.39s 2025-02-10 09:38:28.685650 | orchestrator | designate : Copying over config.json files for services ----------------- 9.18s 2025-02-10 09:38:28.685664 | orchestrator | designate : Restart designate-api container ----------------------------- 9.05s 2025-02-10 09:38:28.685678 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.38s 2025-02-10 09:38:28.685692 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.91s 2025-02-10 09:38:28.685706 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.81s 2025-02-10 09:38:28.685720 | orchestrator | designate : Copying over named.conf ------------------------------------- 5.21s 2025-02-10 09:38:28.685733 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.92s 2025-02-10 09:38:28.685747 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.67s 2025-02-10 09:38:28.685785 | orchestrator | designate : Copying over rndc.key --------------------------------------- 4.59s 2025-02-10 09:38:28.685799 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.52s 2025-02-10 09:38:28.685813 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.51s 2025-02-10 09:38:28.685847 | orchestrator | 2025-02-10 09:38:28 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:28.687759 | orchestrator | 2025-02-10 09:38:28 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:38:28.687797 | orchestrator | 2025-02-10 09:38:28 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:38:28.687821 | orchestrator | 2025-02-10 09:38:28 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:31.730830 | orchestrator | 2025-02-10 09:38:28 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:31.730984 | orchestrator | 2025-02-10 09:38:31 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:31.731901 | orchestrator | 2025-02-10 09:38:31 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:38:31.732823 | orchestrator | 2025-02-10 09:38:31 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:38:31.733872 | orchestrator | 2025-02-10 09:38:31 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:34.778557 | orchestrator | 2025-02-10 09:38:31 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:34.778749 | orchestrator | 2025-02-10 09:38:34 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:34.780782 | orchestrator | 2025-02-10 09:38:34 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state STARTED 2025-02-10 09:38:34.782227 | orchestrator | 2025-02-10 09:38:34 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:38:34.783730 | orchestrator | 2025-02-10 09:38:34 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:37.839200 
| orchestrator | 2025-02-10 09:38:34 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:37.839574 | orchestrator | 2025-02-10 09:38:37 | INFO  | Task fe9e381a-ff67-40f2-836f-c1c364e3d674 is in state STARTED 2025-02-10 09:38:37.841192 | orchestrator | 2025-02-10 09:38:37 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:37.841281 | orchestrator | 2025-02-10 09:38:37 | INFO  | Task a653d8fc-fd28-429f-ad09-f3c694093b68 is in state SUCCESS 2025-02-10 09:38:37.842697 | orchestrator | 2025-02-10 09:38:37.842755 | orchestrator | 2025-02-10 09:38:37.842773 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:38:37.842788 | orchestrator | 2025-02-10 09:38:37.842804 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:38:37.842819 | orchestrator | Monday 10 February 2025 09:37:03 +0000 (0:00:00.343) 0:00:00.343 ******* 2025-02-10 09:38:37.842835 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:38:37.842851 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:38:37.842866 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:38:37.842882 | orchestrator | 2025-02-10 09:38:37.842897 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:38:37.842912 | orchestrator | Monday 10 February 2025 09:37:04 +0000 (0:00:00.791) 0:00:01.135 ******* 2025-02-10 09:38:37.842928 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-02-10 09:38:37.842943 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-02-10 09:38:37.842958 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-02-10 09:38:37.843137 | orchestrator | 2025-02-10 09:38:37.843158 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-02-10 09:38:37.843173 | orchestrator | 2025-02-10 09:38:37.843197 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-02-10 09:38:37.843219 | orchestrator | Monday 10 February 2025 09:37:05 +0000 (0:00:01.038) 0:00:02.173 ******* 2025-02-10 09:38:37.843242 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:38:37.843266 | orchestrator | 2025-02-10 09:38:37.843289 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-02-10 09:38:37.843313 | orchestrator | Monday 10 February 2025 09:37:06 +0000 (0:00:01.149) 0:00:03.323 ******* 2025-02-10 09:38:37.843337 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-02-10 09:38:37.843360 | orchestrator | 2025-02-10 09:38:37.843382 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-02-10 09:38:37.843396 | orchestrator | Monday 10 February 2025 09:37:10 +0000 (0:00:03.451) 0:00:06.775 ******* 2025-02-10 09:38:37.843411 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-02-10 09:38:37.843425 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-02-10 09:38:37.843438 | orchestrator | 2025-02-10 09:38:37.843453 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-02-10 09:38:37.843467 | orchestrator | Monday 10 February 
2025 09:37:17 +0000 (0:00:07.783) 0:00:14.558 ******* 2025-02-10 09:38:37.843480 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-10 09:38:37.843495 | orchestrator | 2025-02-10 09:38:37.843508 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-02-10 09:38:37.843522 | orchestrator | Monday 10 February 2025 09:37:22 +0000 (0:00:04.183) 0:00:18.744 ******* 2025-02-10 09:38:37.843536 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:38:37.843550 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-02-10 09:38:37.843563 | orchestrator | 2025-02-10 09:38:37.843577 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-02-10 09:38:37.843591 | orchestrator | Monday 10 February 2025 09:37:26 +0000 (0:00:04.677) 0:00:23.421 ******* 2025-02-10 09:38:37.843604 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:38:37.843618 | orchestrator | 2025-02-10 09:38:37.843632 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-02-10 09:38:37.843664 | orchestrator | Monday 10 February 2025 09:37:30 +0000 (0:00:03.847) 0:00:27.269 ******* 2025-02-10 09:38:37.843681 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-02-10 09:38:37.843696 | orchestrator | 2025-02-10 09:38:37.843713 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-02-10 09:38:37.843737 | orchestrator | Monday 10 February 2025 09:37:34 +0000 (0:00:04.286) 0:00:31.555 ******* 2025-02-10 09:38:37.843760 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:37.843784 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:38:37.843809 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:38:37.843834 | orchestrator | 2025-02-10 09:38:37.843859 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-02-10 09:38:37.843882 | orchestrator | Monday 10 February 2025 09:37:35 +0000 (0:00:00.349) 0:00:31.905 ******* 2025-02-10 09:38:37.843901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:37.843998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:37.844017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:37.844033 | orchestrator | 2025-02-10 09:38:37.844213 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-02-10 09:38:37.844230 | orchestrator | Monday 10 February 2025 09:37:36 +0000 (0:00:01.300) 0:00:33.205 ******* 2025-02-10 09:38:37.844245 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:37.844259 | orchestrator | 2025-02-10 09:38:37.844274 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-02-10 09:38:37.844288 | orchestrator | Monday 10 February 2025 09:37:36 +0000 (0:00:00.101) 0:00:33.306 ******* 2025-02-10 09:38:37.844303 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:37.844317 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:38:37.844334 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:38:37.844443 | orchestrator | 2025-02-10 09:38:37.844469 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-02-10 09:38:37.844493 | orchestrator | Monday 10 February 2025 09:37:37 +0000 (0:00:00.718) 0:00:34.025 ******* 2025-02-10 09:38:37.844517 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:38:37.844541 | orchestrator | 2025-02-10 09:38:37.844565 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-02-10 09:38:37.844580 | orchestrator | Monday 10 February 2025 09:37:38 +0000 (0:00:00.608) 0:00:34.634 ******* 2025-02-10 09:38:37.844596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:37.844664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:37.844682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:37.844697 | orchestrator | 2025-02-10 09:38:37.844711 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-02-10 09:38:37.844725 | orchestrator | Monday 10 February 2025 09:37:40 +0000 (0:00:02.061) 0:00:36.695 ******* 2025-02-10 09:38:37.844740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 
09:38:37.844755 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:37.844781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:38:37.844804 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:38:37.844826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:38:37.844841 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:38:37.844855 | orchestrator | 2025-02-10 09:38:37.844869 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-02-10 09:38:37.844883 | orchestrator | Monday 10 February 2025 09:37:41 +0000 (0:00:01.227) 0:00:37.923 ******* 2025-02-10 09:38:37.844897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:38:37.844911 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:38:37.844925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:38:37.844940 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:37.844954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:38:37.844976 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:38:37.844990 | orchestrator | 2025-02-10 09:38:37.845004 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-02-10 09:38:37.845017 | orchestrator | Monday 10 February 2025 09:37:42 +0000 (0:00:01.151) 0:00:39.074 ******* 2025-02-10 09:38:37.845083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:37.845102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:37.845121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:37.845137 | orchestrator | 2025-02-10 09:38:37.845153 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-02-10 09:38:37.845168 | orchestrator | Monday 10 February 2025 09:37:46 +0000 (0:00:03.667) 0:00:42.741 ******* 2025-02-10 09:38:37.845192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:37.845221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:37.845246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:37.845263 | orchestrator | 2025-02-10 09:38:37.845278 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-02-10 09:38:37.845294 | orchestrator | Monday 10 February 2025 09:37:54 +0000 (0:00:08.640) 0:00:51.382 ******* 2025-02-10 09:38:37.845309 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-02-10 09:38:37.845325 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-02-10 09:38:37.845341 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-02-10 09:38:37.845357 | orchestrator | 2025-02-10 09:38:37.845372 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-02-10 09:38:37.845394 | orchestrator | Monday 10 February 2025 09:37:57 +0000 (0:00:02.691) 0:00:54.074 ******* 2025-02-10 09:38:37.845408 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:38:37.845422 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:37.845449 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:38:37.845464 | orchestrator | 2025-02-10 09:38:37.845477 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-02-10 09:38:37.845491 | orchestrator | Monday 10 February 2025 09:38:00 +0000 (0:00:02.719) 0:00:56.793 ******* 2025-02-10 09:38:37.845513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:38:37.845528 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:38:37.845542 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:38:37.845557 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:38:37.845591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-10 09:38:37.845607 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:38:37.845621 | orchestrator | 2025-02-10 09:38:37.845635 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-02-10 09:38:37.845648 | orchestrator | Monday 10 February 2025 09:38:01 +0000 (0:00:01.638) 0:00:58.432 ******* 2025-02-10 09:38:37.845663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:37.845685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:37.845699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-10 09:38:37.845714 | orchestrator | 2025-02-10 09:38:37.845728 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-02-10 09:38:37.845742 | orchestrator | Monday 10 February 2025 09:38:05 +0000 (0:00:03.582) 0:01:02.014 ******* 2025-02-10 09:38:37.845756 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:37.845769 | orchestrator | 2025-02-10 09:38:37.845789 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-02-10 09:38:37.845803 | orchestrator | Monday 10 February 2025 09:38:08 +0000 (0:00:02.963) 0:01:04.978 ******* 2025-02-10 09:38:37.845817 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:37.845831 | orchestrator | 2025-02-10 09:38:37.845844 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-02-10 09:38:37.845858 | orchestrator | Monday 10 February 2025 09:38:10 +0000 (0:00:02.243) 0:01:07.221 ******* 2025-02-10 09:38:37.845872 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:37.845886 | orchestrator | 2025-02-10 09:38:37.845899 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-02-10 09:38:37.845913 | orchestrator | Monday 10 February 2025 09:38:22 +0000 (0:00:12.302) 0:01:19.524 ******* 2025-02-10 09:38:37.845927 | orchestrator | 2025-02-10 09:38:37.845947 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-02-10 09:38:40.890758 | orchestrator | Monday 10 February 2025 09:38:23 +0000 (0:00:00.175) 0:01:19.699 ******* 2025-02-10 09:38:40.890904 | orchestrator | 2025-02-10 09:38:40.890925 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-02-10 09:38:40.890940 | orchestrator | Monday 10 February 2025 09:38:23 +0000 (0:00:00.483) 0:01:20.182 ******* 2025-02-10 09:38:40.890955 | orchestrator | 2025-02-10 09:38:40.890970 | 
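The sequence above creates the placement database, creates the placement database user with its grants, and then runs the one-shot bootstrap container that brings the schema up to date, after which the flushed handlers restart the placement-api containers (next entry). As a rough, illustrative sketch only of what the two database tasks accomplish, assuming a reachable MariaDB endpoint and the conventional 'placement' database/user names and credentials, none of which are taken from this log:

    # Illustrative only: create the placement database and service user the way the
    # "Creating placement databases..." tasks conceptually do. Host, passwords and
    # the 'placement' names are assumptions, not values from this log.
    import pymysql

    def ensure_placement_db(db_host: str, root_password: str, placement_password: str) -> None:
        conn = pymysql.connect(host=db_host, user="root", password=root_password, autocommit=True)
        try:
            with conn.cursor() as cur:
                cur.execute("CREATE DATABASE IF NOT EXISTS placement")
                # '%%' is a literal '%' here because parameters are passed to execute().
                cur.execute(
                    "CREATE USER IF NOT EXISTS 'placement'@'%%' IDENTIFIED BY %s",
                    (placement_password,),
                )
                cur.execute("GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%'")
        finally:
            conn.close()
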
orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-02-10 09:38:40.890985 | orchestrator | Monday 10 February 2025 09:38:23 +0000 (0:00:00.207) 0:01:20.389 ******* 2025-02-10 09:38:40.891000 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:38:40.891017 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:38:40.891107 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:38:40.891131 | orchestrator | 2025-02-10 09:38:40.891147 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:38:40.891162 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-10 09:38:40.891178 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:38:40.891192 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:38:40.891206 | orchestrator | 2025-02-10 09:38:40.891220 | orchestrator | 2025-02-10 09:38:40.891234 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:38:40.891248 | orchestrator | Monday 10 February 2025 09:38:34 +0000 (0:00:11.034) 0:01:31.424 ******* 2025-02-10 09:38:40.891263 | orchestrator | =============================================================================== 2025-02-10 09:38:40.891279 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.30s 2025-02-10 09:38:40.891295 | orchestrator | placement : Restart placement-api container ---------------------------- 11.03s 2025-02-10 09:38:40.891311 | orchestrator | placement : Copying over placement.conf --------------------------------- 8.64s 2025-02-10 09:38:40.891327 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.78s 2025-02-10 09:38:40.891342 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.68s 2025-02-10 09:38:40.891357 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.29s 2025-02-10 09:38:40.891373 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.19s 2025-02-10 09:38:40.891389 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.85s 2025-02-10 09:38:40.891404 | orchestrator | placement : Copying over config.json files for services ----------------- 3.67s 2025-02-10 09:38:40.891420 | orchestrator | placement : Check placement containers ---------------------------------- 3.58s 2025-02-10 09:38:40.891435 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.45s 2025-02-10 09:38:40.891451 | orchestrator | placement : Creating placement databases -------------------------------- 2.96s 2025-02-10 09:38:40.891466 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.72s 2025-02-10 09:38:40.891481 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.69s 2025-02-10 09:38:40.891497 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.24s 2025-02-10 09:38:40.891512 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.06s 2025-02-10 09:38:40.891546 | orchestrator | placement : Copying over existing policy file --------------------------- 1.64s 
2025-02-10 09:38:40.891562 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.30s 2025-02-10 09:38:40.891578 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.23s 2025-02-10 09:38:40.891592 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.15s 2025-02-10 09:38:40.891606 | orchestrator | 2025-02-10 09:38:37 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:38:40.891621 | orchestrator | 2025-02-10 09:38:37 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:40.891635 | orchestrator | 2025-02-10 09:38:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:40.891668 | orchestrator | 2025-02-10 09:38:40 | INFO  | Task fe9e381a-ff67-40f2-836f-c1c364e3d674 is in state STARTED 2025-02-10 09:38:40.892272 | orchestrator | 2025-02-10 09:38:40 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:40.893687 | orchestrator | 2025-02-10 09:38:40 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:38:40.895472 | orchestrator | 2025-02-10 09:38:40 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:43.926444 | orchestrator | 2025-02-10 09:38:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:43.926605 | orchestrator | 2025-02-10 09:38:43 | INFO  | Task fe9e381a-ff67-40f2-836f-c1c364e3d674 is in state SUCCESS 2025-02-10 09:38:43.926742 | orchestrator | 2025-02-10 09:38:43 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:43.927636 | orchestrator | 2025-02-10 09:38:43 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:38:43.928706 | orchestrator | 2025-02-10 09:38:43 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:43.929439 | orchestrator | 2025-02-10 09:38:43 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:38:46.973164 | orchestrator | 2025-02-10 09:38:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:46.973290 | orchestrator | 2025-02-10 09:38:46 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:46.975269 | orchestrator | 2025-02-10 09:38:46 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:38:46.975827 | orchestrator | 2025-02-10 09:38:46 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:46.977399 | orchestrator | 2025-02-10 09:38:46 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:38:50.037927 | orchestrator | 2025-02-10 09:38:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:50.038196 | orchestrator | 2025-02-10 09:38:50 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:50.038643 | orchestrator | 2025-02-10 09:38:50 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:38:50.039851 | orchestrator | 2025-02-10 09:38:50 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:50.040630 | orchestrator | 2025-02-10 09:38:50 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:38:50.040746 | orchestrator | 2025-02-10 09:38:50 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:53.122369 | 
orchestrator | 2025-02-10 09:38:53 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:53.122621 | orchestrator | 2025-02-10 09:38:53 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:38:53.124133 | orchestrator | 2025-02-10 09:38:53 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:53.128457 | orchestrator | 2025-02-10 09:38:53 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:38:56.166229 | orchestrator | 2025-02-10 09:38:53 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:56.166394 | orchestrator | 2025-02-10 09:38:56 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:59.235084 | orchestrator | 2025-02-10 09:38:56 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:38:59.235238 | orchestrator | 2025-02-10 09:38:56 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:59.235259 | orchestrator | 2025-02-10 09:38:56 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:38:59.235275 | orchestrator | 2025-02-10 09:38:56 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:38:59.235356 | orchestrator | 2025-02-10 09:38:59 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:38:59.235551 | orchestrator | 2025-02-10 09:38:59 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:38:59.235587 | orchestrator | 2025-02-10 09:38:59 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:38:59.236322 | orchestrator | 2025-02-10 09:38:59 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:02.299383 | orchestrator | 2025-02-10 09:38:59 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:02.299544 | orchestrator | 2025-02-10 09:39:02 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:05.345840 | orchestrator | 2025-02-10 09:39:02 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:05.345966 | orchestrator | 2025-02-10 09:39:02 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:05.345979 | orchestrator | 2025-02-10 09:39:02 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:05.345991 | orchestrator | 2025-02-10 09:39:02 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:05.346101 | orchestrator | 2025-02-10 09:39:05 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:05.346701 | orchestrator | 2025-02-10 09:39:05 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:05.346725 | orchestrator | 2025-02-10 09:39:05 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:05.348949 | orchestrator | 2025-02-10 09:39:05 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:08.403937 | orchestrator | 2025-02-10 09:39:05 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:08.404168 | orchestrator | 2025-02-10 09:39:08 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:08.404844 | orchestrator | 2025-02-10 09:39:08 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:08.404892 | 
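The interleaved status lines here come from the deployment tooling polling the state of the queued service tasks roughly once per second until each reaches SUCCESS (as task fe9e381a-ff67-40f2-836f-c1c364e3d674 already has above). A minimal sketch of that kind of wait loop; get_task_state is a placeholder callable, since the actual client API is not shown in this log:

    # Sketch of a poll-until-done loop matching the "Task <uuid> is in state ..." /
    # "Wait 1 second(s) until the next check" messages. get_task_state is a stand-in.
    import logging
    import time

    logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s", level=logging.INFO)
    log = logging.getLogger("task-wait")

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                log.info("Task %s is in state %s", task_id, state)
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                log.info("Wait %d second(s) until the next check", interval)
                time.sleep(interval)
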
orchestrator | 2025-02-10 09:39:08 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:08.404919 | orchestrator | 2025-02-10 09:39:08 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:11.456427 | orchestrator | 2025-02-10 09:39:08 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:11.456692 | orchestrator | 2025-02-10 09:39:11 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:11.457763 | orchestrator | 2025-02-10 09:39:11 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:11.457812 | orchestrator | 2025-02-10 09:39:11 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:11.457836 | orchestrator | 2025-02-10 09:39:11 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:14.551636 | orchestrator | 2025-02-10 09:39:11 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:14.551818 | orchestrator | 2025-02-10 09:39:14 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:14.556724 | orchestrator | 2025-02-10 09:39:14 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:14.559733 | orchestrator | 2025-02-10 09:39:14 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:14.565016 | orchestrator | 2025-02-10 09:39:14 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:17.616351 | orchestrator | 2025-02-10 09:39:14 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:17.616528 | orchestrator | 2025-02-10 09:39:17 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:20.665704 | orchestrator | 2025-02-10 09:39:17 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:20.665840 | orchestrator | 2025-02-10 09:39:17 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:20.665858 | orchestrator | 2025-02-10 09:39:17 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:20.665873 | orchestrator | 2025-02-10 09:39:17 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:20.665906 | orchestrator | 2025-02-10 09:39:20 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:20.666236 | orchestrator | 2025-02-10 09:39:20 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:20.666262 | orchestrator | 2025-02-10 09:39:20 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:20.666281 | orchestrator | 2025-02-10 09:39:20 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:20.666862 | orchestrator | 2025-02-10 09:39:20 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:23.708358 | orchestrator | 2025-02-10 09:39:23 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:23.708657 | orchestrator | 2025-02-10 09:39:23 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:23.708693 | orchestrator | 2025-02-10 09:39:23 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:23.709377 | orchestrator | 2025-02-10 09:39:23 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:26.769882 | 
orchestrator | 2025-02-10 09:39:23 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:26.770131 | orchestrator | 2025-02-10 09:39:26 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:26.770690 | orchestrator | 2025-02-10 09:39:26 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:26.770727 | orchestrator | 2025-02-10 09:39:26 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:26.772241 | orchestrator | 2025-02-10 09:39:26 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:29.832119 | orchestrator | 2025-02-10 09:39:26 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:29.832291 | orchestrator | 2025-02-10 09:39:29 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:29.832828 | orchestrator | 2025-02-10 09:39:29 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:29.832891 | orchestrator | 2025-02-10 09:39:29 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:29.833576 | orchestrator | 2025-02-10 09:39:29 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:32.876966 | orchestrator | 2025-02-10 09:39:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:32.877187 | orchestrator | 2025-02-10 09:39:32 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:32.878462 | orchestrator | 2025-02-10 09:39:32 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:32.878502 | orchestrator | 2025-02-10 09:39:32 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:32.879846 | orchestrator | 2025-02-10 09:39:32 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:32.881891 | orchestrator | 2025-02-10 09:39:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:35.919169 | orchestrator | 2025-02-10 09:39:35 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:35.919846 | orchestrator | 2025-02-10 09:39:35 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:35.920734 | orchestrator | 2025-02-10 09:39:35 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:35.921789 | orchestrator | 2025-02-10 09:39:35 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:38.964881 | orchestrator | 2025-02-10 09:39:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:38.965035 | orchestrator | 2025-02-10 09:39:38 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:38.965390 | orchestrator | 2025-02-10 09:39:38 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:38.965424 | orchestrator | 2025-02-10 09:39:38 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:38.966344 | orchestrator | 2025-02-10 09:39:38 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:42.020793 | orchestrator | 2025-02-10 09:39:38 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:42.020942 | orchestrator | 2025-02-10 09:39:42 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:42.021535 | orchestrator | 2025-02-10 
09:39:42 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:42.022862 | orchestrator | 2025-02-10 09:39:42 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:42.025216 | orchestrator | 2025-02-10 09:39:42 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:42.029010 | orchestrator | 2025-02-10 09:39:42 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:45.074569 | orchestrator | 2025-02-10 09:39:45 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:45.076238 | orchestrator | 2025-02-10 09:39:45 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:45.076899 | orchestrator | 2025-02-10 09:39:45 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:45.077519 | orchestrator | 2025-02-10 09:39:45 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:48.143258 | orchestrator | 2025-02-10 09:39:45 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:48.143425 | orchestrator | 2025-02-10 09:39:48 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:48.144033 | orchestrator | 2025-02-10 09:39:48 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:48.144106 | orchestrator | 2025-02-10 09:39:48 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:48.144725 | orchestrator | 2025-02-10 09:39:48 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:51.206766 | orchestrator | 2025-02-10 09:39:48 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:51.207093 | orchestrator | 2025-02-10 09:39:51 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:51.208720 | orchestrator | 2025-02-10 09:39:51 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:51.208758 | orchestrator | 2025-02-10 09:39:51 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:51.209548 | orchestrator | 2025-02-10 09:39:51 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:51.211851 | orchestrator | 2025-02-10 09:39:51 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:54.248893 | orchestrator | 2025-02-10 09:39:54 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:54.250668 | orchestrator | 2025-02-10 09:39:54 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:54.251856 | orchestrator | 2025-02-10 09:39:54 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:54.253132 | orchestrator | 2025-02-10 09:39:54 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:39:57.308800 | orchestrator | 2025-02-10 09:39:54 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:39:57.308936 | orchestrator | 2025-02-10 09:39:57 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:39:57.309414 | orchestrator | 2025-02-10 09:39:57 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:39:57.309440 | orchestrator | 2025-02-10 09:39:57 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:39:57.310397 | orchestrator | 2025-02-10 
09:39:57 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:00.344110 | orchestrator | 2025-02-10 09:39:57 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:00.344277 | orchestrator | 2025-02-10 09:40:00 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:00.345881 | orchestrator | 2025-02-10 09:40:00 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:00.345950 | orchestrator | 2025-02-10 09:40:00 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:40:00.346964 | orchestrator | 2025-02-10 09:40:00 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:00.347106 | orchestrator | 2025-02-10 09:40:00 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:03.391451 | orchestrator | 2025-02-10 09:40:03 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:03.400614 | orchestrator | 2025-02-10 09:40:03 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:03.402412 | orchestrator | 2025-02-10 09:40:03 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:40:03.402438 | orchestrator | 2025-02-10 09:40:03 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:06.449616 | orchestrator | 2025-02-10 09:40:03 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:06.449802 | orchestrator | 2025-02-10 09:40:06 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:06.450377 | orchestrator | 2025-02-10 09:40:06 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:06.451550 | orchestrator | 2025-02-10 09:40:06 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:40:06.453344 | orchestrator | 2025-02-10 09:40:06 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:09.510782 | orchestrator | 2025-02-10 09:40:06 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:09.510939 | orchestrator | 2025-02-10 09:40:09 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:09.511512 | orchestrator | 2025-02-10 09:40:09 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:09.511548 | orchestrator | 2025-02-10 09:40:09 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state STARTED 2025-02-10 09:40:09.512336 | orchestrator | 2025-02-10 09:40:09 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:12.558632 | orchestrator | 2025-02-10 09:40:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:12.558798 | orchestrator | 2025-02-10 09:40:12 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:12.561120 | orchestrator | 2025-02-10 09:40:12 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:12.561600 | orchestrator | 2025-02-10 09:40:12.561628 | orchestrator | 2025-02-10 09:40:12.561644 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:40:12.561660 | orchestrator | 2025-02-10 09:40:12.561675 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:40:12.561691 | orchestrator | Monday 10 February 2025 09:38:39 +0000 
(0:00:00.330) 0:00:00.330 ******* 2025-02-10 09:40:12.561706 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:40:12.561796 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:40:12.561812 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:40:12.561827 | orchestrator | 2025-02-10 09:40:12.561851 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:40:12.561875 | orchestrator | Monday 10 February 2025 09:38:40 +0000 (0:00:00.538) 0:00:00.868 ******* 2025-02-10 09:40:12.562777 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-02-10 09:40:12.562803 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-02-10 09:40:12.562817 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-02-10 09:40:12.562831 | orchestrator | 2025-02-10 09:40:12.562845 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-02-10 09:40:12.562859 | orchestrator | 2025-02-10 09:40:12.562874 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-02-10 09:40:12.562889 | orchestrator | Monday 10 February 2025 09:38:41 +0000 (0:00:00.691) 0:00:01.560 ******* 2025-02-10 09:40:12.562903 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:40:12.562917 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:40:12.562931 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:40:12.562945 | orchestrator | 2025-02-10 09:40:12.562959 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:40:12.562974 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:40:12.562990 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:40:12.563016 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:40:12.563031 | orchestrator | 2025-02-10 09:40:12.563045 | orchestrator | 2025-02-10 09:40:12.563099 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:40:12.563116 | orchestrator | Monday 10 February 2025 09:38:42 +0000 (0:00:00.961) 0:00:02.521 ******* 2025-02-10 09:40:12.563130 | orchestrator | =============================================================================== 2025-02-10 09:40:12.563144 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.96s 2025-02-10 09:40:12.563191 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s 2025-02-10 09:40:12.563214 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.54s 2025-02-10 09:40:12.563239 | orchestrator | 2025-02-10 09:40:12.563262 | orchestrator | 2025-02-10 09:40:12.563281 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-02-10 09:40:12.563295 | orchestrator | 2025-02-10 09:40:12.563309 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-02-10 09:40:12.563323 | orchestrator | Monday 10 February 2025 09:34:19 +0000 (0:00:00.166) 0:00:00.166 ******* 2025-02-10 09:40:12.563337 | orchestrator | changed: [localhost] 2025-02-10 09:40:12.563351 | orchestrator | 2025-02-10 09:40:12.563365 | orchestrator | TASK [Download ironic-agent initramfs] 
***************************************** 2025-02-10 09:40:12.563379 | orchestrator | Monday 10 February 2025 09:34:20 +0000 (0:00:00.939) 0:00:01.106 ******* 2025-02-10 09:40:12.563393 | orchestrator | changed: [localhost] 2025-02-10 09:40:12.563407 | orchestrator | 2025-02-10 09:40:12.563424 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-02-10 09:40:12.563447 | orchestrator | Monday 10 February 2025 09:34:49 +0000 (0:00:29.162) 0:00:30.268 ******* 2025-02-10 09:40:12.563469 | orchestrator | changed: [localhost] 2025-02-10 09:40:12.563491 | orchestrator | 2025-02-10 09:40:12.563514 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:40:12.563538 | orchestrator | 2025-02-10 09:40:12.563563 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:40:12.563587 | orchestrator | Monday 10 February 2025 09:34:53 +0000 (0:00:04.044) 0:00:34.313 ******* 2025-02-10 09:40:12.563611 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:40:12.563634 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:40:12.563653 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:40:12.563667 | orchestrator | 2025-02-10 09:40:12.563681 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:40:12.563695 | orchestrator | Monday 10 February 2025 09:34:54 +0000 (0:00:00.468) 0:00:34.781 ******* 2025-02-10 09:40:12.563709 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_True) 2025-02-10 09:40:12.563723 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_True) 2025-02-10 09:40:12.563737 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_True) 2025-02-10 09:40:12.563751 | orchestrator | 2025-02-10 09:40:12.563766 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-02-10 09:40:12.563780 | orchestrator | 2025-02-10 09:40:12.563801 | orchestrator | TASK [ironic : include_tasks] ************************************************** 2025-02-10 09:40:12.563816 | orchestrator | Monday 10 February 2025 09:34:54 +0000 (0:00:00.854) 0:00:35.636 ******* 2025-02-10 09:40:12.563831 | orchestrator | included: /ansible/roles/ironic/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:40:12.563846 | orchestrator | 2025-02-10 09:40:12.563860 | orchestrator | TASK [service-ks-register : ironic | Creating services] ************************ 2025-02-10 09:40:12.563874 | orchestrator | Monday 10 February 2025 09:34:55 +0000 (0:00:00.783) 0:00:36.419 ******* 2025-02-10 09:40:12.563890 | orchestrator | changed: [testbed-node-0] => (item=ironic (baremetal)) 2025-02-10 09:40:12.563904 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector (baremetal-introspection)) 2025-02-10 09:40:12.563918 | orchestrator | 2025-02-10 09:40:12.563989 | orchestrator | TASK [service-ks-register : ironic | Creating endpoints] *********************** 2025-02-10 09:40:12.564006 | orchestrator | Monday 10 February 2025 09:35:03 +0000 (0:00:07.758) 0:00:44.177 ******* 2025-02-10 09:40:12.564021 | orchestrator | changed: [testbed-node-0] => (item=ironic -> https://api-int.testbed.osism.xyz:6385 -> internal) 2025-02-10 09:40:12.564035 | orchestrator | changed: [testbed-node-0] => (item=ironic -> https://api.testbed.osism.xyz:6385 -> public) 2025-02-10 09:40:12.564050 | orchestrator | changed: [testbed-node-0] => 
(item=ironic-inspector -> https://api-int.testbed.osism.xyz:5050 -> internal) 2025-02-10 09:40:12.564249 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector -> https://api.testbed.osism.xyz:5050 -> public) 2025-02-10 09:40:12.564284 | orchestrator | 2025-02-10 09:40:12.564299 | orchestrator | TASK [service-ks-register : ironic | Creating projects] ************************ 2025-02-10 09:40:12.564313 | orchestrator | Monday 10 February 2025 09:35:18 +0000 (0:00:15.066) 0:00:59.243 ******* 2025-02-10 09:40:12.564327 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-10 09:40:12.564342 | orchestrator | 2025-02-10 09:40:12.564356 | orchestrator | TASK [service-ks-register : ironic | Creating users] *************************** 2025-02-10 09:40:12.564369 | orchestrator | Monday 10 February 2025 09:35:21 +0000 (0:00:02.991) 0:01:02.235 ******* 2025-02-10 09:40:12.564383 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:40:12.564397 | orchestrator | changed: [testbed-node-0] => (item=ironic -> service) 2025-02-10 09:40:12.564418 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector -> service) 2025-02-10 09:40:12.564432 | orchestrator | 2025-02-10 09:40:12.564446 | orchestrator | TASK [service-ks-register : ironic | Creating roles] *************************** 2025-02-10 09:40:12.564460 | orchestrator | Monday 10 February 2025 09:35:29 +0000 (0:00:08.296) 0:01:10.531 ******* 2025-02-10 09:40:12.564474 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:40:12.564488 | orchestrator | 2025-02-10 09:40:12.564502 | orchestrator | TASK [service-ks-register : ironic | Granting user roles] ********************** 2025-02-10 09:40:12.564516 | orchestrator | Monday 10 February 2025 09:35:33 +0000 (0:00:03.634) 0:01:14.166 ******* 2025-02-10 09:40:12.564529 | orchestrator | changed: [testbed-node-0] => (item=ironic -> service -> admin) 2025-02-10 09:40:12.564543 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector -> service -> admin) 2025-02-10 09:40:12.564558 | orchestrator | changed: [testbed-node-0] => (item=ironic -> service -> service) 2025-02-10 09:40:12.564572 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector -> service -> service) 2025-02-10 09:40:12.564586 | orchestrator | 2025-02-10 09:40:12.564599 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-02-10 09:40:12.564613 | orchestrator | Monday 10 February 2025 09:35:50 +0000 (0:00:16.940) 0:01:31.107 ******* 2025-02-10 09:40:12.564627 | orchestrator | changed: [testbed-node-1] => (item=iscsi_tcp) 2025-02-10 09:40:12.564640 | orchestrator | changed: [testbed-node-0] => (item=iscsi_tcp) 2025-02-10 09:40:12.564653 | orchestrator | changed: [testbed-node-2] => (item=iscsi_tcp) 2025-02-10 09:40:12.564665 | orchestrator | 2025-02-10 09:40:12.564677 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-02-10 09:40:12.564689 | orchestrator | Monday 10 February 2025 09:35:51 +0000 (0:00:01.102) 0:01:32.209 ******* 2025-02-10 09:40:12.564701 | orchestrator | changed: [testbed-node-0] => (item=iscsi_tcp) 2025-02-10 09:40:12.564713 | orchestrator | changed: [testbed-node-2] => (item=iscsi_tcp) 2025-02-10 09:40:12.564726 | orchestrator | changed: [testbed-node-1] => (item=iscsi_tcp) 2025-02-10 09:40:12.564738 | orchestrator | 2025-02-10 09:40:12.564750 | orchestrator | TASK [module-load : Drop module persistence] 
*********************************** 2025-02-10 09:40:12.564762 | orchestrator | Monday 10 February 2025 09:35:54 +0000 (0:00:02.663) 0:01:34.873 ******* 2025-02-10 09:40:12.564775 | orchestrator | skipping: [testbed-node-0] => (item=iscsi_tcp)  2025-02-10 09:40:12.564787 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:12.564800 | orchestrator | skipping: [testbed-node-1] => (item=iscsi_tcp)  2025-02-10 09:40:12.564812 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:12.564825 | orchestrator | skipping: [testbed-node-2] => (item=iscsi_tcp)  2025-02-10 09:40:12.564837 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:12.564849 | orchestrator | 2025-02-10 09:40:12.564861 | orchestrator | TASK [ironic : Ensuring config directories exist] ****************************** 2025-02-10 09:40:12.564873 | orchestrator | Monday 10 February 2025 09:35:56 +0000 (0:00:02.014) 0:01:36.888 ******* 2025-02-10 09:40:12.564887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:12.565009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:12.565027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:12.565042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:12.565083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:12.565119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:12.565199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:40:12.565219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 
'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:40:12.565232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:40:12.565246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:40:12.565269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:40:12.565313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:40:12.565329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:40:12.565343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:40:12.565357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:40:12.565373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:40:12.565400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:40:12.565423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  
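Each item echoed by this task is one entry of the role's service map: the key names the service, and the value carries the container name, image, bind-mounted volumes, an optional healthcheck, and optional HAProxy frontends. Entries whose enabled flag is off are skipped, as the ironic-dnsmasq ('no') and ironic-prometheus-exporter (False) items above show. A small sketch of consuming such a map, with trimmed-down entries copied from the output above (this mirrors the printed data, not the actual role implementation):

    # Sketch: filter a Kolla-style service map to enabled services and report the
    # healthcheck each one declares; the entries mirror the dicts printed above.
    services = {
        "ironic-dnsmasq": {"container_name": "ironic_dnsmasq", "enabled": "no"},
        "ironic-http": {
            "container_name": "ironic_http",
            "enabled": True,
            "image": "nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1",
            "healthcheck": {
                "interval": "30",
                "retries": "3",
                "start_period": "5",
                "test": ["CMD-SHELL", "healthcheck_listen apache2 8089"],
                "timeout": "30",
            },
        },
    }

    def service_enabled(definition) -> bool:
        # Kolla mixes booleans and yes/no strings for the enabled flag.
        enabled = definition.get("enabled", False)
        if isinstance(enabled, str):
            return enabled.lower() in ("yes", "true", "1")
        return bool(enabled)

    for name, definition in services.items():
        if not service_enabled(definition):
            print(f"skipping: {name}")
            continue
        check = definition.get("healthcheck", {})
        print(f"{name}: container={definition['container_name']}, "
              f"healthcheck every {check.get('interval', '?')}s -> {check.get('test', ['-'])[-1]}")
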
2025-02-10 09:40:12.565438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:40:12.565485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:40:12.565502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:40:12.565517 | orchestrator | 2025-02-10 09:40:12.565532 | orchestrator | TASK [ironic : Check if Ironic policies shall be overwritten] ****************** 2025-02-10 09:40:12.565546 | orchestrator | Monday 10 February 2025 09:36:02 +0000 (0:00:05.874) 0:01:42.763 ******* 2025-02-10 09:40:12.565560 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:12.565574 | orchestrator | 2025-02-10 09:40:12.565588 | orchestrator | TASK [ironic : Check if Ironic Inspector policies shall be overwritten] ******** 2025-02-10 09:40:12.565602 | orchestrator | Monday 10 February 2025 09:36:02 +0000 (0:00:00.258) 0:01:43.022 ******* 2025-02-10 09:40:12.565616 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:12.565630 | orchestrator | 2025-02-10 09:40:12.565645 | orchestrator | TASK [ironic : Set ironic policy file] ***************************************** 2025-02-10 09:40:12.565658 | orchestrator | Monday 10 February 2025 09:36:02 +0000 (0:00:00.409) 0:01:43.431 ******* 2025-02-10 09:40:12.565672 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:12.565686 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:12.565700 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:12.565714 | orchestrator | 2025-02-10 09:40:12.565728 | orchestrator | TASK [ironic : Set ironic-inspector policy file] ******************************* 2025-02-10 09:40:12.565748 | orchestrator | Monday 10 February 2025 09:36:03 +0000 (0:00:01.026) 0:01:44.457 ******* 2025-02-10 09:40:12.565763 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:12.565777 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:12.565791 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:12.565820 | orchestrator | 2025-02-10 09:40:12.565834 | orchestrator | TASK [ironic : 
include_tasks] ************************************************** 2025-02-10 09:40:12.565848 | orchestrator | Monday 10 February 2025 09:36:05 +0000 (0:00:01.534) 0:01:45.992 ******* 2025-02-10 09:40:12.565862 | orchestrator | included: /ansible/roles/ironic/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:40:12.565876 | orchestrator | 2025-02-10 09:40:12.565890 | orchestrator | TASK [service-cert-copy : ironic | Copying over extra CA certificates] ********* 2025-02-10 09:40:12.565903 | orchestrator | Monday 10 February 2025 09:36:06 +0000 (0:00:01.155) 0:01:47.147 ******* 2025-02-10 09:40:12.565918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:12.565976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:12.565994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:12.566009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 
'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:12.566096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:12.566115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:12.566180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:40:12.566199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:40:12.566214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:40:12.566252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:40:12.566268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:40:12.566283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:40:12.566330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 
'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:40:12 | INFO  | Task 56a1f8f0-3fbd-46c2-b266-24e2a5ce5160 is in state SUCCESS 2025-02-10 09:40:12.566348 | orchestrator | 2025-02-10 09:40:12.566363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:40:12.566378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:40:12.566400 | orchestrator | 2025-02-10 09:40:12.566414 | orchestrator | TASK [service-cert-copy : ironic | Copying over backend internal TLS certificate] *** 2025-02-10 09:40:12.566428 | orchestrator | Monday 10 February 2025 09:36:11 +0000 (0:00:05.500) 0:01:52.648 ******* 2025-02-10 09:40:12.566443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:40:12.566469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:40:12.566598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:40:12.566622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:40:12.566637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:40:12.566662 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:12.566692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:40:12.566708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': 
True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:40:12.566723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:40:12.566771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:40:12.566788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:40:12.566803 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:12.566839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 
'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:40:12.566855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:40:12.566870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:40:12.566885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:40:12.566929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:40:12.566946 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:12.566960 | orchestrator | 2025-02-10 09:40:12.566974 | orchestrator | TASK [service-cert-copy : ironic | Copying over backend internal TLS key] ****** 2025-02-10 09:40:12.566988 | orchestrator | Monday 10 February 2025 09:36:14 +0000 (0:00:02.676) 0:01:55.324 ******* 2025-02-10 09:40:12.567022 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:40:12.567037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:40:12.567052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:40:12.567088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:40:12.567139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': 
['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:40:12.567156 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:12.567184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:40:12.567212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:40:12.567227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:40:12.567242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:40:12.567257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:40:12.567301 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:12.567318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:40:12.567352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:40:12.567367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:40:12.567383 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:40:12.567398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:40:12.567412 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:12.567426 | orchestrator | 2025-02-10 09:40:12.567441 | orchestrator | TASK [ironic : Copying over config.json files for services] ******************** 2025-02-10 09:40:12.567455 | orchestrator | Monday 10 February 2025 09:36:17 +0000 (0:00:02.731) 0:01:58.056 ******* 2025-02-10 09:40:12.567500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:12.567550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:12.567567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 
'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:12.567582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:12.567597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:12.567648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:12.567678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:40:12.567695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:40:12.567710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:40:12.567725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:40:12.567791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:40:12.567809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:40:12.567824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:40:12.567839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:40:12.567854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:40:12.567869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:40:12.567883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': 
{}}})  2025-02-10 09:40:12.567935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:40:12.567963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:40:12.567980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:40:12.567994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:40:12.568009 | orchestrator | 2025-02-10 09:40:12.568023 | orchestrator | TASK [ironic : Copying over ironic.conf] *************************************** 2025-02-10 09:40:12.568038 | orchestrator | Monday 10 February 2025 09:36:24 +0000 (0:00:07.642) 0:02:05.698 ******* 2025-02-10 09:40:12.568052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:12.568115 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:12.568186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:12.568205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:12.568220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 
'listen_port': '5050'}}}})  2025-02-10 09:40:12.568235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:40:12.568250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:40:12.568281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:40:12.568330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:40:12.568348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:12.568363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:40:12.568378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:40:12.568393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:12.568430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:40:12.568453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 
09:40:12.568469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:40:12.568484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:40:12.568498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:40:12.568524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:40:12.568549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:40:12.568563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:40:12.568577 | orchestrator | 2025-02-10 09:40:12.568600 | orchestrator | TASK [ironic : Copying over 
inspector.conf] ************************************ 2025-02-10 09:40:12.568615 | orchestrator | Monday 10 February 2025 09:36:37 +0000 (0:00:12.419) 0:02:18.117 ******* 2025-02-10 09:40:12.568629 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:12.568643 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:40:12.568656 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:40:12.568670 | orchestrator | 2025-02-10 09:40:12.568684 | orchestrator | TASK [ironic : Copying over dnsmasq.conf] ************************************** 2025-02-10 09:40:12.568698 | orchestrator | Monday 10 February 2025 09:36:43 +0000 (0:00:05.640) 0:02:23.758 ******* 2025-02-10 09:40:12.568712 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/ironic/templates/ironic-dnsmasq.conf.j2)  2025-02-10 09:40:12.568726 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:12.568740 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/ironic/templates/ironic-dnsmasq.conf.j2)  2025-02-10 09:40:12.568754 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:12.568768 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/ironic/templates/ironic-dnsmasq.conf.j2)  2025-02-10 09:40:12.568782 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:12.568796 | orchestrator | 2025-02-10 09:40:12.568810 | orchestrator | TASK [ironic : Copying pxelinux.cfg default] *********************************** 2025-02-10 09:40:12.568824 | orchestrator | Monday 10 February 2025 09:36:45 +0000 (0:00:02.700) 0:02:26.458 ******* 2025-02-10 09:40:12.568838 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/ironic/templates/pxelinux.default.j2)  2025-02-10 09:40:12.568851 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:12.568865 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/ironic/templates/pxelinux.default.j2)  2025-02-10 09:40:12.568879 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:12.568893 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/ironic/templates/pxelinux.default.j2)  2025-02-10 09:40:12.568907 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:12.568921 | orchestrator | 2025-02-10 09:40:12.568935 | orchestrator | TASK [ironic : Copying ironic-agent kernel and initramfs (PXE)] **************** 2025-02-10 09:40:12.568949 | orchestrator | Monday 10 February 2025 09:36:50 +0000 (0:00:04.462) 0:02:30.921 ******* 2025-02-10 09:40:12.568963 | orchestrator | skipping: [testbed-node-2] => (item=ironic-agent.kernel)  2025-02-10 09:40:12.568976 | orchestrator | skipping: [testbed-node-1] => (item=ironic-agent.kernel)  2025-02-10 09:40:12.568990 | orchestrator | skipping: [testbed-node-0] => (item=ironic-agent.kernel)  2025-02-10 09:40:12.569004 | orchestrator | skipping: [testbed-node-2] => (item=ironic-agent.initramfs)  2025-02-10 09:40:12.569018 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:12.569039 | orchestrator | skipping: [testbed-node-1] => (item=ironic-agent.initramfs)  2025-02-10 09:40:12.569053 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:12.569122 | orchestrator | skipping: [testbed-node-0] => (item=ironic-agent.initramfs)  2025-02-10 09:40:12.569137 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:12.569157 | orchestrator | 2025-02-10 09:40:12.569171 | orchestrator | TASK [ironic : Copying ironic-agent kernel and initramfs (iPXE)] *************** 2025-02-10 09:40:12.569185 | orchestrator | Monday 10 February 2025 
09:36:56 +0000 (0:00:06.714) 0:02:37.636 ******* 2025-02-10 09:40:12.569199 | orchestrator | changed: [testbed-node-1] => (item=ironic-agent.kernel) 2025-02-10 09:40:12.569212 | orchestrator | changed: [testbed-node-0] => (item=ironic-agent.kernel) 2025-02-10 09:40:12.569224 | orchestrator | changed: [testbed-node-2] => (item=ironic-agent.kernel) 2025-02-10 09:40:12.569236 | orchestrator | changed: [testbed-node-1] => (item=ironic-agent.initramfs) 2025-02-10 09:40:12.569249 | orchestrator | changed: [testbed-node-0] => (item=ironic-agent.initramfs) 2025-02-10 09:40:12.569261 | orchestrator | changed: [testbed-node-2] => (item=ironic-agent.initramfs) 2025-02-10 09:40:12.569273 | orchestrator | 2025-02-10 09:40:12.569286 | orchestrator | TASK [ironic : Copying inspector.ipxe] ***************************************** 2025-02-10 09:40:12.569298 | orchestrator | Monday 10 February 2025 09:37:11 +0000 (0:00:14.389) 0:02:52.026 ******* 2025-02-10 09:40:12.569310 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/ironic/templates/inspector.ipxe.j2) 2025-02-10 09:40:12.569323 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/ironic/templates/inspector.ipxe.j2) 2025-02-10 09:40:12.569335 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/ironic/templates/inspector.ipxe.j2) 2025-02-10 09:40:12.569347 | orchestrator | 2025-02-10 09:40:12.569360 | orchestrator | TASK [ironic : Copying ironic-http-httpd.conf] ********************************* 2025-02-10 09:40:12.569372 | orchestrator | Monday 10 February 2025 09:37:16 +0000 (0:00:04.801) 0:02:56.827 ******* 2025-02-10 09:40:12.569384 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/ironic/templates/ironic-http-httpd.conf.j2) 2025-02-10 09:40:12.569396 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/ironic/templates/ironic-http-httpd.conf.j2) 2025-02-10 09:40:12.569409 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/ironic/templates/ironic-http-httpd.conf.j2) 2025-02-10 09:40:12.569421 | orchestrator | 2025-02-10 09:40:12.569433 | orchestrator | TASK [ironic : Copying over ironic-prometheus-exporter-wsgi.conf] ************** 2025-02-10 09:40:12.569445 | orchestrator | Monday 10 February 2025 09:37:20 +0000 (0:00:04.701) 0:03:01.530 ******* 2025-02-10 09:40:12.569457 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/ironic/templates/ironic-prometheus-exporter-wsgi.conf.j2)  2025-02-10 09:40:12.569470 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:12.569482 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/ironic/templates/ironic-prometheus-exporter-wsgi.conf.j2)  2025-02-10 09:40:12.569495 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:12.569513 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/ironic/templates/ironic-prometheus-exporter-wsgi.conf.j2)  2025-02-10 09:40:12.569526 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:12.569539 | orchestrator | 2025-02-10 09:40:12.569551 | orchestrator | TASK [ironic : Copying over existing Ironic policy file] *********************** 2025-02-10 09:40:12.569563 | orchestrator | Monday 10 February 2025 09:37:25 +0000 (0:00:04.387) 0:03:05.918 ******* 2025-02-10 09:40:12.569576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': 
['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:40:12.569597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:40:12.569610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:40:12.569636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:40:12.569649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:40:12.569674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:40:12.569688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:40:12.569707 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:12.569720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:40:12.569734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:40:12.569757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:40:12.569776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:40:12.569789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:40:12.569808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:40:12.569821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:40:12.569834 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:12.569847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:40:12.569870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:40:12.569889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:40:12.569909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:40:12.569922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:40:12.569935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:40:12.569954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:40:12.569967 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:12.569980 | orchestrator | 2025-02-10 09:40:12.569993 | orchestrator | TASK [ironic : Copying over existing Ironic Inspector policy file] ************* 2025-02-10 09:40:12.570005 | orchestrator | Monday 10 February 2025 09:37:26 +0000 (0:00:01.778) 0:03:07.696 ******* 2025-02-10 09:40:12.570054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:40:12.570093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:40:12.570120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:40:12.570146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:40:12.570160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:40:12.570173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:40:12.570186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:40:12.570199 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:12.570218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:40:12.570238 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:40:12.570251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:40:12.570275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:40:12.570288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:40:12.570301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:40:12.570314 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:40:12.570333 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:12.570353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-10 09:40:12.570366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:40:12.570388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-10 09:40:12.570403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': 
'/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-10 09:40:12.570416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-10 09:40:12.570441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:40:12.570455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:40:12.570468 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:12.570480 | orchestrator | 2025-02-10 09:40:12.570493 | orchestrator | TASK [ironic : Copying over ironic-api-wsgi.conf] ****************************** 2025-02-10 09:40:12.570506 | orchestrator | Monday 10 February 2025 09:37:29 +0000 (0:00:02.626) 0:03:10.323 ******* 2025-02-10 09:40:12.570518 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:12.570530 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:40:12.570781 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:40:12.570795 | orchestrator | 2025-02-10 09:40:12.570808 | orchestrator | TASK [ironic : Check ironic containers] **************************************** 2025-02-10 09:40:12.570821 | orchestrator | Monday 10 February 2025 09:37:33 +0000 (0:00:03.998) 0:03:14.321 ******* 2025-02-10 09:40:12.570834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:12.570848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:12.570861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-10 09:40:12.570889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:12.570903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:12.570916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:40:12.570930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:40:12.570944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:40:12.570970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 
'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-10 09:40:12.570984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:40:12.570998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:40:12.571011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-10 09:40:12.571023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:40:12.571042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:40:12.571055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': 
['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:40:12.571092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:40:12.571106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:40:12.571119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:40:12.571132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-10 09:40:12.571145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-10 09:40:12.571164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-10 09:40:12.571177 | orchestrator | 2025-02-10 09:40:12.571190 | orchestrator | TASK [ironic : include_tasks] ************************************************** 2025-02-10 09:40:12.571203 | orchestrator | Monday 10 February 2025 09:37:38 +0000 (0:00:04.667) 0:03:18.989 ******* 2025-02-10 09:40:12.571215 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:40:12.571228 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:40:12.571240 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:40:12.571253 | orchestrator | 2025-02-10 09:40:12.571265 | orchestrator | TASK [ironic : Creating Ironic database] *************************************** 2025-02-10 09:40:12.571278 | orchestrator | Monday 10 February 2025 09:37:38 +0000 (0:00:00.410) 0:03:19.400 ******* 2025-02-10 09:40:12.571291 | orchestrator | changed: [testbed-node-0] => (item={'database_name': 'ironic', 'group': 'ironic-api'}) 2025-02-10 09:40:12.571303 | orchestrator | changed: [testbed-node-0] => (item={'database_name': 'ironic_inspector', 'group': 'ironic-inspector'}) 2025-02-10 09:40:12.571315 | orchestrator | 2025-02-10 09:40:12.571328 | orchestrator | TASK [ironic : Creating Ironic database user and setting permissions] ********** 2025-02-10 09:40:12.571340 | orchestrator | Monday 10 February 2025 09:37:43 +0000 (0:00:04.649) 0:03:24.051 ******* 2025-02-10 09:40:12.571353 | orchestrator | changed: [testbed-node-0] => (item=ironic) 2025-02-10 09:40:12.571365 | orchestrator | changed: [testbed-node-0] => (item=ironic_inspector) 2025-02-10 09:40:12.571378 | orchestrator | 2025-02-10 09:40:12.571395 | orchestrator | TASK [ironic : Running Ironic bootstrap container] ***************************** 2025-02-10 09:40:12.571408 | orchestrator | Monday 10 February 2025 09:37:50 +0000 (0:00:06.820) 0:03:30.872 ******* 2025-02-10 09:40:12.571420 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:12.571433 | orchestrator | 2025-02-10 09:40:12.571445 | orchestrator | TASK [ironic : Running Ironic Inspector bootstrap container] ******************* 2025-02-10 09:40:12.571457 | orchestrator | Monday 10 February 2025 09:38:09 +0000 (0:00:18.880) 0:03:49.752 ******* 2025-02-10 09:40:12.571470 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:12.571482 | orchestrator | 2025-02-10 09:40:12.571494 | orchestrator | TASK [ironic : Running ironic-tftp bootstrap container] ************************ 2025-02-10 09:40:12.571507 | orchestrator | Monday 10 February 2025 09:38:20 +0000 (0:00:11.872) 0:04:01.625 ******* 2025-02-10 09:40:12.571519 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:12.571531 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:40:12.571544 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:40:12.571556 | orchestrator | 2025-02-10 09:40:12.571568 | orchestrator | TASK [ironic : Flush handlers] ************************************************* 2025-02-10 09:40:12.571580 | orchestrator | Monday 10 February 2025 09:38:35 +0000 (0:00:14.663) 0:04:16.288 ******* 2025-02-10 09:40:12.571592 | orchestrator | 2025-02-10 09:40:12.571605 | orchestrator | TASK [ironic : Flush handlers] ************************************************* 2025-02-10 09:40:12.571617 | orchestrator | Monday 10 February 2025 09:38:35 +0000 (0:00:00.221) 0:04:16.510 ******* 2025-02-10 09:40:12.571629 | orchestrator | 2025-02-10 
09:40:12.571642 | orchestrator | TASK [ironic : Flush handlers] ************************************************* 2025-02-10 09:40:12.571654 | orchestrator | Monday 10 February 2025 09:38:35 +0000 (0:00:00.070) 0:04:16.580 ******* 2025-02-10 09:40:12.571667 | orchestrator | 2025-02-10 09:40:12.571679 | orchestrator | RUNNING HANDLER [ironic : Restart ironic-conductor container] ****************** 2025-02-10 09:40:12.571691 | orchestrator | Monday 10 February 2025 09:38:35 +0000 (0:00:00.057) 0:04:16.638 ******* 2025-02-10 09:40:12.571703 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:12.571722 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:40:12.571735 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:40:12.571747 | orchestrator | 2025-02-10 09:40:12.571759 | orchestrator | RUNNING HANDLER [ironic : Restart ironic-api container] ************************ 2025-02-10 09:40:12.571771 | orchestrator | Monday 10 February 2025 09:38:56 +0000 (0:00:20.393) 0:04:37.031 ******* 2025-02-10 09:40:12.571784 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:12.571796 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:40:12.571808 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:40:12.571821 | orchestrator | 2025-02-10 09:40:12.571833 | orchestrator | RUNNING HANDLER [ironic : Restart ironic-inspector container] ****************** 2025-02-10 09:40:12.571845 | orchestrator | Monday 10 February 2025 09:39:10 +0000 (0:00:13.877) 0:04:50.909 ******* 2025-02-10 09:40:12.571858 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:12.571870 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:40:12.571882 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:40:12.571895 | orchestrator | 2025-02-10 09:40:12.571907 | orchestrator | RUNNING HANDLER [ironic : Restart ironic-tftp container] *********************** 2025-02-10 09:40:12.571919 | orchestrator | Monday 10 February 2025 09:39:28 +0000 (0:00:18.699) 0:05:09.608 ******* 2025-02-10 09:40:12.571932 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:12.571944 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:40:12.571957 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:40:12.571969 | orchestrator | 2025-02-10 09:40:12.571986 | orchestrator | RUNNING HANDLER [ironic : Restart ironic-http container] *********************** 2025-02-10 09:40:12.571999 | orchestrator | Monday 10 February 2025 09:39:48 +0000 (0:00:19.619) 0:05:29.228 ******* 2025-02-10 09:40:12.572011 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:40:12.572024 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:40:12.572036 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:40:12.572049 | orchestrator | 2025-02-10 09:40:12.572105 | orchestrator | TASK [ironic : Flush and delete ironic-inspector iptables chain] *************** 2025-02-10 09:40:12.572119 | orchestrator | Monday 10 February 2025 09:40:07 +0000 (0:00:18.551) 0:05:47.779 ******* 2025-02-10 09:40:12.572132 | orchestrator | ok: [testbed-node-0] => (item=flush) 2025-02-10 09:40:12.572144 | orchestrator | ok: [testbed-node-1] => (item=flush) 2025-02-10 09:40:12.572157 | orchestrator | ok: [testbed-node-2] => (item=flush) 2025-02-10 09:40:12.572169 | orchestrator | ok: [testbed-node-1] => (item=delete-chain) 2025-02-10 09:40:12.572182 | orchestrator | ok: [testbed-node-0] => (item=delete-chain) 2025-02-10 09:40:12.572194 | orchestrator | ok: [testbed-node-2] => (item=delete-chain) 2025-02-10 09:40:12.572207 | orchestrator | 
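(Annotation, not part of the captured console output.) The container definitions logged above repeatedly carry healthcheck entries such as 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672']. As a rough illustration only: the real kolla healthcheck_port script inspects the named service's sockets inside the container, but the underlying idea is a port-based liveness probe that exits 0 when healthy and non-zero otherwise, which is the convention Docker healthchecks expect. A minimal, purely hypothetical sketch of such a probe (not kolla's implementation) could look like this:

```python
#!/usr/bin/env python3
"""Illustrative stand-in for a port-based container healthcheck.

This is NOT kolla's `healthcheck_port` script; it only sketches the idea
behind healthcheck entries like
    'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672']
i.e. "can the service reach the given TCP port (here: RabbitMQ on 5672)?".
Exit code 0 = healthy, 1 = unhealthy.
"""
import socket
import sys


def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Hypothetical usage: ./check.py 127.0.0.1 5672
    host = sys.argv[1] if len(sys.argv) > 1 else "127.0.0.1"
    port = int(sys.argv[2]) if len(sys.argv) > 2 else 5672
    sys.exit(0 if port_reachable(host, port) else 1)
```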
2025-02-10 09:40:12.572219 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:40:12.572231 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:40:12.572245 | orchestrator | testbed-node-0 : ok=33  changed=26  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-02-10 09:40:12.572264 | orchestrator | testbed-node-1 : ok=23  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-02-10 09:40:12.572277 | orchestrator | testbed-node-2 : ok=23  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-02-10 09:40:12.572289 | orchestrator | 2025-02-10 09:40:12.572302 | orchestrator | 2025-02-10 09:40:12.572314 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:40:12.572327 | orchestrator | Monday 10 February 2025 09:40:10 +0000 (0:00:03.649) 0:05:51.428 ******* 2025-02-10 09:40:12.572339 | orchestrator | =============================================================================== 2025-02-10 09:40:12.572351 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 29.16s 2025-02-10 09:40:12.572364 | orchestrator | ironic : Restart ironic-conductor container ---------------------------- 20.39s 2025-02-10 09:40:12.572388 | orchestrator | ironic : Restart ironic-tftp container --------------------------------- 19.62s 2025-02-10 09:40:15.597621 | orchestrator | ironic : Running Ironic bootstrap container ---------------------------- 18.88s 2025-02-10 09:40:15.597756 | orchestrator | ironic : Restart ironic-inspector container ---------------------------- 18.70s 2025-02-10 09:40:15.597775 | orchestrator | ironic : Restart ironic-http container --------------------------------- 18.55s 2025-02-10 09:40:15.597791 | orchestrator | service-ks-register : ironic | Granting user roles --------------------- 16.94s 2025-02-10 09:40:15.597805 | orchestrator | service-ks-register : ironic | Creating endpoints ---------------------- 15.07s 2025-02-10 09:40:15.597819 | orchestrator | ironic : Running ironic-tftp bootstrap container ----------------------- 14.66s 2025-02-10 09:40:15.597833 | orchestrator | ironic : Copying ironic-agent kernel and initramfs (iPXE) -------------- 14.39s 2025-02-10 09:40:15.597847 | orchestrator | ironic : Restart ironic-api container ---------------------------------- 13.88s 2025-02-10 09:40:15.597860 | orchestrator | ironic : Copying over ironic.conf -------------------------------------- 12.42s 2025-02-10 09:40:15.597875 | orchestrator | ironic : Running Ironic Inspector bootstrap container ------------------ 11.87s 2025-02-10 09:40:15.597888 | orchestrator | service-ks-register : ironic | Creating users --------------------------- 8.30s 2025-02-10 09:40:15.597902 | orchestrator | service-ks-register : ironic | Creating services ------------------------ 7.76s 2025-02-10 09:40:15.597916 | orchestrator | ironic : Copying over config.json files for services -------------------- 7.64s 2025-02-10 09:40:15.597929 | orchestrator | ironic : Creating Ironic database user and setting permissions ---------- 6.82s 2025-02-10 09:40:15.597943 | orchestrator | ironic : Copying ironic-agent kernel and initramfs (PXE) ---------------- 6.71s 2025-02-10 09:40:15.597957 | orchestrator | ironic : Ensuring config directories exist ------------------------------ 5.87s 2025-02-10 09:40:15.597971 | orchestrator | ironic : Copying over inspector.conf 
------------------------------------ 5.64s 2025-02-10 09:40:15.597985 | orchestrator | 2025-02-10 09:40:12 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:15.598000 | orchestrator | 2025-02-10 09:40:12 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:15.598121 | orchestrator | 2025-02-10 09:40:15 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:15.598623 | orchestrator | 2025-02-10 09:40:15 | INFO  | Task 9c589987-1b89-4915-9880-23c27e9bfb8b is in state STARTED 2025-02-10 09:40:15.605228 | orchestrator | 2025-02-10 09:40:15 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:15.606095 | orchestrator | 2025-02-10 09:40:15 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:18.635134 | orchestrator | 2025-02-10 09:40:15 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:18.635395 | orchestrator | 2025-02-10 09:40:18 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:18.635426 | orchestrator | 2025-02-10 09:40:18 | INFO  | Task 9c589987-1b89-4915-9880-23c27e9bfb8b is in state STARTED 2025-02-10 09:40:18.635447 | orchestrator | 2025-02-10 09:40:18 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:18.636168 | orchestrator | 2025-02-10 09:40:18 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:18.636834 | orchestrator | 2025-02-10 09:40:18 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:21.667347 | orchestrator | 2025-02-10 09:40:21 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:21.667532 | orchestrator | 2025-02-10 09:40:21 | INFO  | Task 9c589987-1b89-4915-9880-23c27e9bfb8b is in state STARTED 2025-02-10 09:40:21.667554 | orchestrator | 2025-02-10 09:40:21 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:21.668218 | orchestrator | 2025-02-10 09:40:21 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:24.719804 | orchestrator | 2025-02-10 09:40:21 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:24.719940 | orchestrator | 2025-02-10 09:40:24 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:24.720380 | orchestrator | 2025-02-10 09:40:24 | INFO  | Task 9c589987-1b89-4915-9880-23c27e9bfb8b is in state STARTED 2025-02-10 09:40:24.720404 | orchestrator | 2025-02-10 09:40:24 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:24.721206 | orchestrator | 2025-02-10 09:40:24 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:24.724682 | orchestrator | 2025-02-10 09:40:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:27.775594 | orchestrator | 2025-02-10 09:40:27 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:27.777590 | orchestrator | 2025-02-10 09:40:27 | INFO  | Task 9c589987-1b89-4915-9880-23c27e9bfb8b is in state STARTED 2025-02-10 09:40:27.781235 | orchestrator | 2025-02-10 09:40:27 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:27.782568 | orchestrator | 2025-02-10 09:40:27 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:30.827495 | orchestrator | 2025-02-10 09:40:27 | INFO  | Wait 1 second(s) 
until the next check 2025-02-10 09:40:30.827653 | orchestrator | 2025-02-10 09:40:30 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:30.828531 | orchestrator | 2025-02-10 09:40:30 | INFO  | Task 9c589987-1b89-4915-9880-23c27e9bfb8b is in state STARTED 2025-02-10 09:40:30.828568 | orchestrator | 2025-02-10 09:40:30 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:30.829671 | orchestrator | 2025-02-10 09:40:30 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:33.871792 | orchestrator | 2025-02-10 09:40:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:33.871929 | orchestrator | 2025-02-10 09:40:33 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:33.872184 | orchestrator | 2025-02-10 09:40:33 | INFO  | Task 9c589987-1b89-4915-9880-23c27e9bfb8b is in state STARTED 2025-02-10 09:40:33.872994 | orchestrator | 2025-02-10 09:40:33 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:33.873732 | orchestrator | 2025-02-10 09:40:33 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:36.900301 | orchestrator | 2025-02-10 09:40:33 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:36.900515 | orchestrator | 2025-02-10 09:40:36 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:36.901322 | orchestrator | 2025-02-10 09:40:36 | INFO  | Task 9c589987-1b89-4915-9880-23c27e9bfb8b is in state STARTED 2025-02-10 09:40:36.901363 | orchestrator | 2025-02-10 09:40:36 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:36.902134 | orchestrator | 2025-02-10 09:40:36 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:36.902299 | orchestrator | 2025-02-10 09:40:36 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:39.935743 | orchestrator | 2025-02-10 09:40:39 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:39.935941 | orchestrator | 2025-02-10 09:40:39 | INFO  | Task 9c589987-1b89-4915-9880-23c27e9bfb8b is in state STARTED 2025-02-10 09:40:39.937059 | orchestrator | 2025-02-10 09:40:39 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:39.940667 | orchestrator | 2025-02-10 09:40:39 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:39.942210 | orchestrator | 2025-02-10 09:40:39 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:42.979869 | orchestrator | 2025-02-10 09:40:42 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:42.980346 | orchestrator | 2025-02-10 09:40:42 | INFO  | Task 9c589987-1b89-4915-9880-23c27e9bfb8b is in state STARTED 2025-02-10 09:40:42.981607 | orchestrator | 2025-02-10 09:40:42 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:42.982165 | orchestrator | 2025-02-10 09:40:42 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:42.982485 | orchestrator | 2025-02-10 09:40:42 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:46.022879 | orchestrator | 2025-02-10 09:40:46 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:46.023911 | orchestrator | 2025-02-10 09:40:46 | INFO  | Task 9c589987-1b89-4915-9880-23c27e9bfb8b 
is in state STARTED 2025-02-10 09:40:46.023990 | orchestrator | 2025-02-10 09:40:46 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:46.025515 | orchestrator | 2025-02-10 09:40:46 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:49.066610 | orchestrator | 2025-02-10 09:40:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:49.066883 | orchestrator | 2025-02-10 09:40:49 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:49.068539 | orchestrator | 2025-02-10 09:40:49 | INFO  | Task 9c589987-1b89-4915-9880-23c27e9bfb8b is in state STARTED 2025-02-10 09:40:49.068577 | orchestrator | 2025-02-10 09:40:49 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:49.069387 | orchestrator | 2025-02-10 09:40:49 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:49.069492 | orchestrator | 2025-02-10 09:40:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:52.117897 | orchestrator | 2025-02-10 09:40:52 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:52.119465 | orchestrator | 2025-02-10 09:40:52 | INFO  | Task 9c589987-1b89-4915-9880-23c27e9bfb8b is in state STARTED 2025-02-10 09:40:52.119640 | orchestrator | 2025-02-10 09:40:52 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:52.119665 | orchestrator | 2025-02-10 09:40:52 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:52.119861 | orchestrator | 2025-02-10 09:40:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:55.172054 | orchestrator | 2025-02-10 09:40:55 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:55.172453 | orchestrator | 2025-02-10 09:40:55 | INFO  | Task 9c589987-1b89-4915-9880-23c27e9bfb8b is in state STARTED 2025-02-10 09:40:55.172493 | orchestrator | 2025-02-10 09:40:55 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:55.173191 | orchestrator | 2025-02-10 09:40:55 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:40:58.207772 | orchestrator | 2025-02-10 09:40:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:40:58.207913 | orchestrator | 2025-02-10 09:40:58 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:40:58.209698 | orchestrator | 2025-02-10 09:40:58 | INFO  | Task 9c589987-1b89-4915-9880-23c27e9bfb8b is in state STARTED 2025-02-10 09:40:58.209737 | orchestrator | 2025-02-10 09:40:58 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:40:58.210268 | orchestrator | 2025-02-10 09:40:58 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:41:01.250849 | orchestrator | 2025-02-10 09:40:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:01.251057 | orchestrator | 2025-02-10 09:41:01 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:41:01.251388 | orchestrator | 2025-02-10 09:41:01 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED 2025-02-10 09:41:01.251431 | orchestrator | 2025-02-10 09:41:01 | INFO  | Task 9c589987-1b89-4915-9880-23c27e9bfb8b is in state SUCCESS 2025-02-10 09:41:01.251456 | orchestrator | 2025-02-10 09:41:01 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is 
in state STARTED 2025-02-10 09:41:01.251482 | orchestrator | 2025-02-10 09:41:01 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:41:04.279404 | orchestrator | 2025-02-10 09:41:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:04.279604 | orchestrator | 2025-02-10 09:41:04 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:41:04.280019 | orchestrator | 2025-02-10 09:41:04 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED 2025-02-10 09:41:04.280119 | orchestrator | 2025-02-10 09:41:04 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:41:04.280930 | orchestrator | 2025-02-10 09:41:04 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:41:07.330976 | orchestrator | 2025-02-10 09:41:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:07.331123 | orchestrator | 2025-02-10 09:41:07 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:41:07.331867 | orchestrator | 2025-02-10 09:41:07 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED 2025-02-10 09:41:07.332327 | orchestrator | 2025-02-10 09:41:07 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:41:07.333123 | orchestrator | 2025-02-10 09:41:07 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:41:10.361177 | orchestrator | 2025-02-10 09:41:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:10.361345 | orchestrator | 2025-02-10 09:41:10 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:41:10.361905 | orchestrator | 2025-02-10 09:41:10 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED 2025-02-10 09:41:10.361942 | orchestrator | 2025-02-10 09:41:10 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state STARTED 2025-02-10 09:41:10.363205 | orchestrator | 2025-02-10 09:41:10 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:41:13.400548 | orchestrator | 2025-02-10 09:41:10 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:41:13.400820 | orchestrator | 2025-02-10 09:41:13 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state STARTED 2025-02-10 09:41:13.401842 | orchestrator | 2025-02-10 09:41:13 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED 2025-02-10 09:41:13.401915 | orchestrator | 2025-02-10 09:41:13 | INFO  | Task 62fa25ee-f3b3-41d8-8243-f575dc58d7ab is in state SUCCESS 2025-02-10 09:41:13.404247 | orchestrator | 2025-02-10 09:41:13.404301 | orchestrator | 2025-02-10 09:41:13.404316 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:41:13.404331 | orchestrator | 2025-02-10 09:41:13.404345 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:41:13.404359 | orchestrator | Monday 10 February 2025 09:40:20 +0000 (0:00:00.414) 0:00:00.414 ******* 2025-02-10 09:41:13.404374 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:41:13.404390 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:41:13.404404 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:41:13.404417 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:41:13.404431 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:41:13.404445 | orchestrator | ok: [testbed-node-5] 2025-02-10 
09:41:13.404459 | orchestrator | ok: [testbed-manager] 2025-02-10 09:41:13.404473 | orchestrator | 2025-02-10 09:41:13.404487 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:41:13.404520 | orchestrator | Monday 10 February 2025 09:40:22 +0000 (0:00:01.473) 0:00:01.888 ******* 2025-02-10 09:41:13.404535 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-02-10 09:41:13.404549 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-02-10 09:41:13.404563 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-02-10 09:41:13.404577 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-02-10 09:41:13.404591 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-02-10 09:41:13.404605 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-02-10 09:41:13.404619 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-02-10 09:41:13.404633 | orchestrator | 2025-02-10 09:41:13.404647 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-02-10 09:41:13.404661 | orchestrator | 2025-02-10 09:41:13.404675 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-02-10 09:41:13.404689 | orchestrator | Monday 10 February 2025 09:40:24 +0000 (0:00:01.890) 0:00:03.778 ******* 2025-02-10 09:41:13.404704 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-02-10 09:41:13.404719 | orchestrator | 2025-02-10 09:41:13.404733 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-02-10 09:41:13.404747 | orchestrator | Monday 10 February 2025 09:40:26 +0000 (0:00:01.985) 0:00:05.764 ******* 2025-02-10 09:41:13.404761 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-02-10 09:41:13.404774 | orchestrator | 2025-02-10 09:41:13.404788 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-02-10 09:41:13.404802 | orchestrator | Monday 10 February 2025 09:40:30 +0000 (0:00:04.624) 0:00:10.388 ******* 2025-02-10 09:41:13.404817 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-02-10 09:41:13.404833 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-02-10 09:41:13.404849 | orchestrator | 2025-02-10 09:41:13.404866 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-02-10 09:41:13.404881 | orchestrator | Monday 10 February 2025 09:40:36 +0000 (0:00:05.901) 0:00:16.289 ******* 2025-02-10 09:41:13.404896 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-10 09:41:13.404913 | orchestrator | 2025-02-10 09:41:13.404928 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-02-10 09:41:13.404944 | orchestrator | Monday 10 February 2025 09:40:39 +0000 (0:00:02.818) 0:00:19.107 ******* 2025-02-10 09:41:13.404959 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:41:13.404992 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-02-10 
09:41:13.405008 | orchestrator | 2025-02-10 09:41:13.405025 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-02-10 09:41:13.405039 | orchestrator | Monday 10 February 2025 09:40:42 +0000 (0:00:03.464) 0:00:22.572 ******* 2025-02-10 09:41:13.405053 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:41:13.405067 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-02-10 09:41:13.405129 | orchestrator | 2025-02-10 09:41:13.405153 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-02-10 09:41:13.405174 | orchestrator | Monday 10 February 2025 09:40:49 +0000 (0:00:06.443) 0:00:29.015 ******* 2025-02-10 09:41:13.405188 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-02-10 09:41:13.405202 | orchestrator | 2025-02-10 09:41:13.405216 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:41:13.405231 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:41:13.405245 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:41:13.405260 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:41:13.405275 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:41:13.405290 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:41:13.405314 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:41:13.405329 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:41:13.405342 | orchestrator | 2025-02-10 09:41:13.405356 | orchestrator | 2025-02-10 09:41:13.405370 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:41:13.405384 | orchestrator | Monday 10 February 2025 09:40:57 +0000 (0:00:08.218) 0:00:37.233 ******* 2025-02-10 09:41:13.405399 | orchestrator | =============================================================================== 2025-02-10 09:41:13.405412 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 8.22s 2025-02-10 09:41:13.405427 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.44s 2025-02-10 09:41:13.405440 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.90s 2025-02-10 09:41:13.405454 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.62s 2025-02-10 09:41:13.405468 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.46s 2025-02-10 09:41:13.405482 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.82s 2025-02-10 09:41:13.405496 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.99s 2025-02-10 09:41:13.405510 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.89s 2025-02-10 09:41:13.405524 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.47s 2025-02-10 09:41:13.405538 | orchestrator | 2025-02-10 
09:41:13.405552 | orchestrator | 2025-02-10 09:41:13.405566 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:41:13.405580 | orchestrator | 2025-02-10 09:41:13.405593 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:41:13.405607 | orchestrator | Monday 10 February 2025 09:38:29 +0000 (0:00:00.282) 0:00:00.282 ******* 2025-02-10 09:41:13.405621 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:41:13.405644 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:41:13.405659 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:41:13.405673 | orchestrator | 2025-02-10 09:41:13.405694 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:41:13.405708 | orchestrator | Monday 10 February 2025 09:38:29 +0000 (0:00:00.317) 0:00:00.599 ******* 2025-02-10 09:41:13.405723 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-02-10 09:41:13.405737 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-02-10 09:41:13.405756 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-02-10 09:41:13.405770 | orchestrator | 2025-02-10 09:41:13.405784 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-02-10 09:41:13.405798 | orchestrator | 2025-02-10 09:41:13.405812 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-02-10 09:41:13.405826 | orchestrator | Monday 10 February 2025 09:38:30 +0000 (0:00:00.267) 0:00:00.867 ******* 2025-02-10 09:41:13.405840 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:41:13.405854 | orchestrator | 2025-02-10 09:41:13.405868 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-02-10 09:41:13.405882 | orchestrator | Monday 10 February 2025 09:38:30 +0000 (0:00:00.694) 0:00:01.562 ******* 2025-02-10 09:41:13.405896 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-02-10 09:41:13.405910 | orchestrator | 2025-02-10 09:41:13.405924 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-02-10 09:41:13.405938 | orchestrator | Monday 10 February 2025 09:38:34 +0000 (0:00:04.076) 0:00:05.638 ******* 2025-02-10 09:41:13.405952 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-02-10 09:41:13.405967 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-02-10 09:41:13.405980 | orchestrator | 2025-02-10 09:41:13.405995 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-02-10 09:41:13.406009 | orchestrator | Monday 10 February 2025 09:38:42 +0000 (0:00:07.539) 0:00:13.178 ******* 2025-02-10 09:41:13.406119 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-10 09:41:13.406135 | orchestrator | 2025-02-10 09:41:13.406150 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-02-10 09:41:13.406164 | orchestrator | Monday 10 February 2025 09:38:45 +0000 (0:00:03.146) 0:00:16.325 ******* 2025-02-10 09:41:13.406178 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:41:13.406192 | 
orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-02-10 09:41:13.406206 | orchestrator | 2025-02-10 09:41:13.406220 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-02-10 09:41:13.406234 | orchestrator | Monday 10 February 2025 09:38:49 +0000 (0:00:04.256) 0:00:20.581 ******* 2025-02-10 09:41:13.406248 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:41:13.406262 | orchestrator | 2025-02-10 09:41:13.406276 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-02-10 09:41:13.406290 | orchestrator | Monday 10 February 2025 09:38:53 +0000 (0:00:03.911) 0:00:24.493 ******* 2025-02-10 09:41:13.406304 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-02-10 09:41:13.406318 | orchestrator | 2025-02-10 09:41:13.406332 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-02-10 09:41:13.406346 | orchestrator | Monday 10 February 2025 09:38:59 +0000 (0:00:05.501) 0:00:29.994 ******* 2025-02-10 09:41:13.406360 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:13.406374 | orchestrator | 2025-02-10 09:41:13.406396 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-02-10 09:41:13.406422 | orchestrator | Monday 10 February 2025 09:39:03 +0000 (0:00:04.127) 0:00:34.121 ******* 2025-02-10 09:41:13.406446 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:13.406460 | orchestrator | 2025-02-10 09:41:13.406475 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-02-10 09:41:13.406489 | orchestrator | Monday 10 February 2025 09:39:08 +0000 (0:00:04.922) 0:00:39.044 ******* 2025-02-10 09:41:13.406503 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:13.406517 | orchestrator | 2025-02-10 09:41:13.406531 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-02-10 09:41:13.406545 | orchestrator | Monday 10 February 2025 09:39:13 +0000 (0:00:04.688) 0:00:43.732 ******* 2025-02-10 09:41:13.406562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:41:13.406584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:41:13.406601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:13.406617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:13.406641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:41:13.406667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:13.406682 | orchestrator | 2025-02-10 09:41:13.406696 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-02-10 09:41:13.406711 | orchestrator | Monday 10 February 2025 09:39:18 +0000 (0:00:05.159) 0:00:48.892 ******* 2025-02-10 09:41:13.406725 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:13.406739 | orchestrator | 2025-02-10 09:41:13.406753 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-02-10 09:41:13.406767 | orchestrator | Monday 10 February 2025 09:39:18 +0000 (0:00:00.360) 0:00:49.253 ******* 2025-02-10 09:41:13.406781 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:13.406795 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:13.406810 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:13.406832 | orchestrator | 2025-02-10 09:41:13.406847 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-02-10 09:41:13.406861 | orchestrator | Monday 10 February 2025 09:39:19 +0000 (0:00:00.797) 0:00:50.050 ******* 2025-02-10 09:41:13.406876 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:41:13.406891 | orchestrator | 2025-02-10 09:41:13.406910 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-02-10 09:41:13.406925 | orchestrator | Monday 10 February 2025 09:39:20 +0000 (0:00:01.395) 0:00:51.445 ******* 2025-02-10 09:41:13.406940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:41:13.406955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:41:13.406985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:41:13.407001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:13.407016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:13.407031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:13.407045 | orchestrator | 2025-02-10 09:41:13.407060 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-02-10 09:41:13.407145 | orchestrator | Monday 10 February 2025 
09:39:24 +0000 (0:00:03.838) 0:00:55.284 ******* 2025-02-10 09:41:13.407164 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:41:13.407179 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:41:13.407193 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:41:13.407207 | orchestrator | 2025-02-10 09:41:13.407221 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-02-10 09:41:13.407235 | orchestrator | Monday 10 February 2025 09:39:25 +0000 (0:00:01.175) 0:00:56.460 ******* 2025-02-10 09:41:13.407249 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:41:13.407264 | orchestrator | 2025-02-10 09:41:13.407277 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-02-10 09:41:13.407291 | orchestrator | Monday 10 February 2025 09:39:29 +0000 (0:00:03.550) 0:01:00.010 ******* 2025-02-10 09:41:13.407316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:41:13.407333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:41:13.407348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:13.407361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:41:13.407392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:13.407414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:13.407427 | orchestrator | 2025-02-10 09:41:13.407440 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-02-10 09:41:13.407453 | orchestrator | Monday 10 February 2025 09:39:36 +0000 (0:00:07.605) 0:01:07.615 ******* 2025-02-10 09:41:13.407466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:41:13.407481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:13.407494 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:13.407507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:41:13.407527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:13.407541 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:13.407559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:41:13.407573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:13.407586 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:13.407599 | orchestrator | 2025-02-10 09:41:13.407612 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-02-10 09:41:13.407624 | orchestrator | Monday 10 February 2025 09:39:42 +0000 (0:00:05.057) 0:01:12.673 ******* 2025-02-10 09:41:13.407637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:41:13.407671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:13.407684 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:13.407703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:41:13.407716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:13.407729 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:13.407743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:41:13.407756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:13.407772 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:13.407785 | orchestrator | 2025-02-10 09:41:13.407797 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-02-10 09:41:13.407809 | orchestrator | Monday 10 February 2025 09:39:45 +0000 (0:00:03.828) 0:01:16.502 ******* 2025-02-10 09:41:13.407822 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:41:13.407842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:41:13.407855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:41:13.407868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:13.407888 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:13.407901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:13.407914 | orchestrator | 2025-02-10 09:41:13.407926 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-02-10 09:41:13.407939 | orchestrator | Monday 10 February 2025 09:39:51 +0000 (0:00:05.759) 0:01:22.261 ******* 2025-02-10 09:41:13.407959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:41:13.407972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:41:13.407992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:41:13.408005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:13.408019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:13.408038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:13.408051 | orchestrator | 2025-02-10 09:41:13.408064 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-02-10 
09:41:13.408099 | orchestrator | Monday 10 February 2025 09:40:08 +0000 (0:00:16.602) 0:01:38.863 ******* 2025-02-10 09:41:13.408113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:41:13.408133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:13.408147 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:13.408160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:41:13.408173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:13.408186 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:13.408206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-10 09:41:13.408224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:41:13.408255 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:13.408277 | orchestrator | 2025-02-10 09:41:13.408298 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-02-10 09:41:13.408310 | orchestrator | Monday 10 February 2025 09:40:12 +0000 (0:00:03.977) 0:01:42.841 ******* 2025-02-10 09:41:13.408324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:41:13.408337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:41:13.408356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-10 09:41:13.408370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:13.408390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:13.408403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:13.408416 | orchestrator | 2025-02-10 09:41:13.408428 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-02-10 09:41:13.408441 | orchestrator | Monday 10 February 2025 09:40:16 +0000 (0:00:04.106) 0:01:46.947 ******* 2025-02-10 09:41:13.408454 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:13.408466 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:13.408479 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:13.408492 | orchestrator | 2025-02-10 09:41:13.408504 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-02-10 09:41:13.408516 | orchestrator | Monday 10 February 2025 09:40:17 +0000 (0:00:01.103) 0:01:48.051 ******* 2025-02-10 09:41:13.408529 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:13.408541 | orchestrator | 2025-02-10 09:41:13.408554 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-02-10 09:41:13.408566 | orchestrator | Monday 10 February 2025 09:40:19 +0000 (0:00:02.273) 0:01:50.325 ******* 2025-02-10 09:41:13.408579 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:13.408591 | orchestrator | 2025-02-10 09:41:13.408604 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-02-10 09:41:13.408617 | orchestrator | Monday 10 February 2025 09:40:21 +0000 (0:00:02.114) 0:01:52.440 ******* 2025-02-10 09:41:13.408629 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:13.408641 | orchestrator | 2025-02-10 09:41:13.408654 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-02-10 09:41:13.408667 | orchestrator | Monday 10 February 2025 09:40:36 +0000 (0:00:15.180) 0:02:07.620 ******* 2025-02-10 09:41:13.408679 | orchestrator | 2025-02-10 09:41:13.408692 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-02-10 09:41:13.408704 | orchestrator | Monday 10 February 2025 09:40:37 +0000 (0:00:00.146) 0:02:07.767 ******* 2025-02-10 09:41:13.408716 | orchestrator | 2025-02-10 09:41:13.408735 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-02-10 09:41:13.408748 | orchestrator | Monday 10 February 2025 09:40:37 +0000 (0:00:00.041) 0:02:07.808 ******* 2025-02-10 09:41:13.408767 | orchestrator | 2025-02-10 09:41:13.408780 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-02-10 09:41:13.408798 | orchestrator | Monday 10 February 2025 09:40:37 +0000 (0:00:00.066) 0:02:07.875 ******* 2025-02-10 09:41:16.445485 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:16.445622 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:41:16.445641 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:41:16.445657 | orchestrator | 2025-02-10 09:41:16.445673 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-02-10 09:41:16.445689 | orchestrator | Monday 10 February 2025 09:40:52 +0000 (0:00:14.955) 0:02:22.830 ******* 2025-02-10 09:41:16.445703 | orchestrator | changed: [testbed-node-0] 
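The loop items printed by the magnum tasks above all echo the same kolla-ansible service definitions. For readability, this is the shape of those dictionaries as a hedged Python sketch: the variable name magnum_services is illustrative, the values are copied from the testbed-node-0 entries in this log, and the output is abridged (the empty-string volume placeholders and the empty dimensions mapping are omitted). It is a reconstruction for reference, not the kolla-ansible source.

# Hedged reconstruction of the service definitions echoed in the magnum loop
# items above (values taken from the testbed-node-0 output in this log).
magnum_services = {
    "magnum-api": {
        "container_name": "magnum_api",
        "group": "magnum-api",
        "enabled": True,
        "image": "nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1",
        "environment": {"DUMMY_ENVIRONMENT": "kolla_useless_env"},
        "volumes": [
            "/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"],
            "timeout": "30",
        },
        # Internal and external haproxy frontends for the API on port 9511.
        "haproxy": {
            "magnum_api": {"enabled": "yes", "mode": "http", "external": False,
                           "port": "9511", "listen_port": "9511"},
            "magnum_api_external": {"enabled": "yes", "mode": "http", "external": True,
                                    "external_fqdn": "api.testbed.osism.xyz",
                                    "port": "9511", "listen_port": "9511"},
        },
    },
    "magnum-conductor": {
        "container_name": "magnum_conductor",
        "group": "magnum-conductor",
        "enabled": True,
        "image": "nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1",
        "environment": {"http_proxy": "", "https_proxy": "",
                        "no_proxy": "localhost,127.0.0.1,192.168.16.10,192.168.16.9"},
        "volumes": [
            "/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "magnum:/var/lib/magnum/",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_port magnum-conductor 5672"],
            "timeout": "30",
        },
    },
}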
2025-02-10 09:41:16.445717 | orchestrator | changed: [testbed-node-1]
2025-02-10 09:41:16.445731 | orchestrator | changed: [testbed-node-2]
2025-02-10 09:41:16.445746 | orchestrator |
2025-02-10 09:41:16.445760 | orchestrator | PLAY RECAP *********************************************************************
2025-02-10 09:41:16.445775 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-02-10 09:41:16.445791 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-02-10 09:41:16.445805 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-02-10 09:41:16.445819 | orchestrator |
2025-02-10 09:41:16.445833 | orchestrator |
2025-02-10 09:41:16.445847 | orchestrator | TASKS RECAP ********************************************************************
2025-02-10 09:41:16.445861 | orchestrator | Monday 10 February 2025 09:41:12 +0000 (0:00:20.376) 0:02:43.206 *******
2025-02-10 09:41:16.445876 | orchestrator | ===============================================================================
2025-02-10 09:41:16.445890 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 20.38s
2025-02-10 09:41:16.445904 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 16.60s
2025-02-10 09:41:16.445917 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.18s
2025-02-10 09:41:16.445931 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.96s
2025-02-10 09:41:16.445945 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 7.61s
2025-02-10 09:41:16.445958 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.54s
2025-02-10 09:41:16.445972 | orchestrator | magnum : Copying over config.json files for services -------------------- 5.76s
2025-02-10 09:41:16.445987 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 5.50s
2025-02-10 09:41:16.446002 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 5.16s
2025-02-10 09:41:16.446104 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS certificate --- 5.06s
2025-02-10 09:41:16.446126 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.92s
2025-02-10 09:41:16.446142 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.69s
2025-02-10 09:41:16.446157 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.26s
2025-02-10 09:41:16.446173 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 4.13s
2025-02-10 09:41:16.446189 | orchestrator | magnum : Check magnum containers ---------------------------------------- 4.11s
2025-02-10 09:41:16.446204 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.08s
2025-02-10 09:41:16.446219 | orchestrator | magnum : Copying over existing policy file ------------------------------ 3.98s
2025-02-10 09:41:16.446235 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.91s
2025-02-10 09:41:16.446250 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.84s
2025-02-10 09:41:16.446300 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 3.83s
2025-02-10 09:41:16.446317 | orchestrator | 2025-02-10 09:41:13 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED
2025-02-10 09:41:16.446333 | orchestrator | 2025-02-10 09:41:13 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:41:16.446368 | orchestrator | 2025-02-10 09:41:16 | INFO  | Task e47b19e4-1509-40d6-84a4-fd917f91ac34 is in state STARTED
2025-02-10 09:41:16.456498 | orchestrator | 2025-02-10 09:41:16 | INFO  | Task c58a3eeb-bf35-47e7-9ad5-e8d02bcd9fd8 is in state SUCCESS
2025-02-10 09:41:16.458403 | orchestrator |
2025-02-10 09:41:16.458815 | orchestrator |
2025-02-10 09:41:16.458857 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-02-10 09:41:16.458877 | orchestrator |
2025-02-10 09:41:16.458894 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-02-10 09:41:16.458933 | orchestrator | Monday 10 February 2025 09:34:17 +0000 (0:00:00.301) 0:00:00.301 *******
2025-02-10 09:41:16.458953 | orchestrator | ok: [testbed-node-0]
2025-02-10 09:41:16.458973 | orchestrator | ok: [testbed-node-1]
2025-02-10 09:41:16.458991 | orchestrator | ok: [testbed-node-2]
2025-02-10 09:41:16.459008 | orchestrator | ok: [testbed-node-3]
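The interleaved INFO lines above come from the OSISM wait loop that polls each submitted task until it leaves the STARTED state. A minimal sketch of such a loop, assuming a hypothetical get_state() callable; the real implementation is not part of this log.

import time

def wait_for_task(task_id, get_state, interval=1):
    # Poll until the task leaves STARTED, mirroring the
    # "is in state ... / Wait 1 second(s) until the next check" lines above.
    while True:
        state = get_state(task_id)  # e.g. STARTED, SUCCESS
        print(f"Task {task_id} is in state {state}")
        if state != "STARTED":
            return state
        print(f"Wait {interval} second(s) until the next check")
        time.sleep(interval)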
2025-02-10 09:41:16.459946 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:41:16.459957 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:41:16.459967 | orchestrator | 2025-02-10 09:41:16.459977 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-02-10 09:41:16.459988 | orchestrator | Monday 10 February 2025 09:34:24 +0000 (0:00:01.721) 0:00:07.096 ******* 2025-02-10 09:41:16.459998 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:41:16.460008 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:41:16.460017 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:41:16.460028 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:41:16.460038 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:41:16.460048 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:41:16.460058 | orchestrator | 2025-02-10 09:41:16.460138 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-02-10 09:41:16.460158 | orchestrator | Monday 10 February 2025 09:34:26 +0000 (0:00:01.515) 0:00:08.612 ******* 2025-02-10 09:41:16.460170 | orchestrator | ok: [testbed-node-0] => { 2025-02-10 09:41:16.460181 | orchestrator |  "changed": false, 2025-02-10 09:41:16.460191 | orchestrator |  "msg": "All assertions passed" 2025-02-10 09:41:16.460201 | orchestrator | } 2025-02-10 09:41:16.460211 | orchestrator | ok: [testbed-node-1] => { 2025-02-10 09:41:16.460221 | orchestrator |  "changed": false, 2025-02-10 09:41:16.460231 | orchestrator |  "msg": "All assertions passed" 2025-02-10 09:41:16.460241 | orchestrator | } 2025-02-10 09:41:16.460252 | orchestrator | ok: [testbed-node-2] => { 2025-02-10 09:41:16.460272 | orchestrator |  "changed": false, 2025-02-10 09:41:16.460282 | orchestrator |  "msg": "All assertions passed" 2025-02-10 09:41:16.460293 | orchestrator | } 2025-02-10 09:41:16.460304 | orchestrator | ok: [testbed-node-3] => { 2025-02-10 09:41:16.460314 | orchestrator |  "changed": false, 2025-02-10 09:41:16.460324 | orchestrator |  "msg": "All assertions passed" 2025-02-10 09:41:16.460334 | orchestrator | } 2025-02-10 09:41:16.460345 | orchestrator | ok: [testbed-node-4] => { 2025-02-10 09:41:16.460355 | orchestrator |  "changed": false, 2025-02-10 09:41:16.460365 | orchestrator |  "msg": "All assertions passed" 2025-02-10 09:41:16.460382 | orchestrator | } 2025-02-10 09:41:16.460398 | orchestrator | ok: [testbed-node-5] => { 2025-02-10 09:41:16.460414 | orchestrator |  "changed": false, 2025-02-10 09:41:16.460429 | orchestrator |  "msg": "All assertions passed" 2025-02-10 09:41:16.460443 | orchestrator | } 2025-02-10 09:41:16.460460 | orchestrator | 2025-02-10 09:41:16.460476 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-02-10 09:41:16.460494 | orchestrator | Monday 10 February 2025 09:34:27 +0000 (0:00:01.092) 0:00:09.705 ******* 2025-02-10 09:41:16.460510 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.460526 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.460536 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.460547 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.460557 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.460567 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.460577 | orchestrator | 2025-02-10 09:41:16.460587 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-02-10 09:41:16.460597 | orchestrator | Monday 
10 February 2025 09:34:28 +0000 (0:00:00.854) 0:00:10.559 ******* 2025-02-10 09:41:16.460607 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-02-10 09:41:16.460618 | orchestrator | 2025-02-10 09:41:16.460941 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-02-10 09:41:16.460960 | orchestrator | Monday 10 February 2025 09:34:32 +0000 (0:00:03.868) 0:00:14.428 ******* 2025-02-10 09:41:16.460970 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-02-10 09:41:16.460982 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-02-10 09:41:16.460992 | orchestrator | 2025-02-10 09:41:16.461035 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-02-10 09:41:16.461048 | orchestrator | Monday 10 February 2025 09:34:39 +0000 (0:00:07.664) 0:00:22.092 ******* 2025-02-10 09:41:16.461058 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-10 09:41:16.461068 | orchestrator | 2025-02-10 09:41:16.461130 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-02-10 09:41:16.461142 | orchestrator | Monday 10 February 2025 09:34:43 +0000 (0:00:03.528) 0:00:25.621 ******* 2025-02-10 09:41:16.461152 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:41:16.461162 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-02-10 09:41:16.461172 | orchestrator | 2025-02-10 09:41:16.461182 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-02-10 09:41:16.461212 | orchestrator | Monday 10 February 2025 09:34:47 +0000 (0:00:04.192) 0:00:29.813 ******* 2025-02-10 09:41:16.461224 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:41:16.461242 | orchestrator | 2025-02-10 09:41:16.461258 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-02-10 09:41:16.461274 | orchestrator | Monday 10 February 2025 09:34:51 +0000 (0:00:03.586) 0:00:33.400 ******* 2025-02-10 09:41:16.461289 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-02-10 09:41:16.461306 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-02-10 09:41:16.461322 | orchestrator | 2025-02-10 09:41:16.461339 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-02-10 09:41:16.461356 | orchestrator | Monday 10 February 2025 09:35:00 +0000 (0:00:09.654) 0:00:43.054 ******* 2025-02-10 09:41:16.461374 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.461391 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.461404 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.461414 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.461424 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.461434 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.461444 | orchestrator | 2025-02-10 09:41:16.461454 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-02-10 09:41:16.461464 | orchestrator | Monday 10 February 2025 09:35:02 +0000 (0:00:01.484) 0:00:44.539 ******* 2025-02-10 09:41:16.461474 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.461484 | 
orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.461494 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.461510 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.461527 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.461543 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.461558 | orchestrator | 2025-02-10 09:41:16.461574 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-02-10 09:41:16.461590 | orchestrator | Monday 10 February 2025 09:35:07 +0000 (0:00:05.812) 0:00:50.351 ******* 2025-02-10 09:41:16.461607 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:41:16.461626 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:41:16.461643 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:41:16.461661 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:41:16.461679 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:41:16.461695 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:41:16.461711 | orchestrator | 2025-02-10 09:41:16.461738 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-02-10 09:41:16.461751 | orchestrator | Monday 10 February 2025 09:35:10 +0000 (0:00:02.254) 0:00:52.606 ******* 2025-02-10 09:41:16.461762 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.461774 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.461786 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.461797 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.461809 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.461820 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.461831 | orchestrator | 2025-02-10 09:41:16.461842 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-02-10 09:41:16.461854 | orchestrator | Monday 10 February 2025 09:35:16 +0000 (0:00:06.381) 0:00:58.987 ******* 2025-02-10 09:41:16.461869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.462192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.462220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.462241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.462259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.462278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.462308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.462329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.462357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.462377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.462395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.462413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.462477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.462495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.462512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.462530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.462547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.462576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.462604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.462624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.462641 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.462660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.462686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.462712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.462732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.462751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.462771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.462789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.462817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.462836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.462854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2025-02-10 09:41:16.462880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.462899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.462917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.462936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.463358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.463415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.463538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.463564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.463582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.463601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.463630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.463649 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:41:16.463857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.463882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.464377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.464410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.464495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.464669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.464952 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.464977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.465203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.465232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.465245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.465269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.465282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.465373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.465388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.465628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.465661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.465676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.465691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.465793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.465812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.465822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.465840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:16.465850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.465860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.465916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.465929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.465949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.465958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.465968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.466230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:16.466424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.466441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.466462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 
'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.466472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:16.466489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.466499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.466578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.466593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.466611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.466620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.466659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.466734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.466755 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:41:16.467253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.467288 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.467304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.467319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.467435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.467457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.467485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.467500 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:41:16.467513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  
2025-02-10 09:41:16.467527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.467540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.467614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.467636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.467646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.467655 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.467663 | orchestrator | 2025-02-10 09:41:16.467671 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-02-10 09:41:16.467680 | orchestrator | Monday 10 February 2025 09:35:20 +0000 (0:00:04.009) 0:01:02.997 ******* 2025-02-10 09:41:16.467688 | orchestrator | [WARNING]: Skipped 2025-02-10 09:41:16.467696 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-02-10 09:41:16.467705 | orchestrator | due to this access issue: 2025-02-10 09:41:16.467714 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-02-10 09:41:16.467722 | orchestrator | a directory 2025-02-10 09:41:16.467730 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:41:16.467738 | orchestrator | 2025-02-10 09:41:16.467746 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-02-10 09:41:16.467754 | orchestrator | Monday 10 February 2025 09:35:21 +0000 (0:00:00.985) 0:01:03.982 ******* 2025-02-10 09:41:16.467763 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:41:16.467777 | orchestrator | 2025-02-10 09:41:16.467790 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-02-10 09:41:16.467802 | orchestrator | Monday 10 February 2025 09:35:24 +0000 (0:00:02.473) 0:01:06.455 ******* 2025-02-10 09:41:16.468218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.468258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.468268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.468276 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:41:16.468286 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:41:16.468346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:16.468366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:16.468543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:16.468556 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:41:16.468565 | orchestrator | 2025-02-10 09:41:16.468574 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-02-10 09:41:16.468583 | orchestrator | Monday 10 February 2025 09:35:31 +0000 (0:00:06.938) 0:01:13.394 ******* 2025-02-10 09:41:16.468593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.468644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.468735 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.468746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.468755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.468763 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.468772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.469090 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.469110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.469119 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.469183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.469206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.469215 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.469223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.469231 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.469239 | orchestrator | 2025-02-10 09:41:16.469247 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-02-10 09:41:16.469256 | orchestrator | Monday 10 February 2025 09:35:37 +0000 (0:00:06.581) 0:01:19.975 ******* 2025-02-10 09:41:16.469264 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.469273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.469285 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.469334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.469346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.469355 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.469370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.469379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.469387 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.469396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.469409 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.469450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.469459 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.469513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.469524 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.469533 | orchestrator | 2025-02-10 09:41:16.469542 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-02-10 09:41:16.469550 | orchestrator | Monday 10 February 2025 09:35:44 +0000 (0:00:07.204) 0:01:27.180 ******* 2025-02-10 09:41:16.469558 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.469566 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.469574 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.469582 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.469590 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.469606 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.469615 | orchestrator | 2025-02-10 09:41:16.469623 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-02-10 09:41:16.469631 | orchestrator | Monday 10 February 2025 09:35:51 +0000 (0:00:06.314) 0:01:33.494 ******* 2025-02-10 09:41:16.469639 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.469647 | orchestrator | 2025-02-10 09:41:16.469655 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-02-10 09:41:16.469663 | orchestrator | Monday 10 February 2025 09:35:51 +0000 (0:00:00.131) 0:01:33.626 ******* 2025-02-10 09:41:16.469671 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.469678 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.469686 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.469694 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.469702 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.469710 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.469718 | orchestrator | 2025-02-10 09:41:16.469726 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-02-10 09:41:16.469734 | orchestrator | Monday 10 February 2025 09:35:53 +0000 (0:00:02.690) 0:01:36.317 ******* 2025-02-10 09:41:16.474281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
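The neutron-server items in this task also carry the HAProxy frontend map for the Neutron API. Restated from the logged dict (values verbatim from this log), it defines an internal listener and an external one behind api.testbed.osism.xyz, both on port 9696; how kolla-ansible templates this map into the actual HAProxy configuration is not shown here.

# Restated from the neutron-server loop item logged above; values are verbatim
# from this log. The templating into haproxy.cfg is handled by kolla-ansible
# and is not reproduced in this sketch.
neutron_server_haproxy = {
    "neutron_server": {            # internal VIP listener
        "enabled": True,
        "mode": "http",
        "external": False,
        "port": "9696",
        "listen_port": "9696",
    },
    "neutron_server_external": {   # public frontend behind api.testbed.osism.xyz
        "enabled": True,
        "mode": "http",
        "external": True,
        "external_fqdn": "api.testbed.osism.xyz",
        "port": "9696",
        "listen_port": "9696",
    },
}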
2025-02-10 09:41:16.474341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.474427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.474454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.474464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.474491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474499 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.474521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.474530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.474539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.474570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.474592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474609 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.474624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.474641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.474671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.474679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.474702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.474724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.474741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.474758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.474779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474788 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.474797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.474811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.474851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 
09:41:16.474875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.474884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.474893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.474910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.474943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.474963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.474972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.474981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.474989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475004 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.475017 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.475030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.475068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.475127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.475136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.475154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.475176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.475195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.475213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.475235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.475263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.475289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475310 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.475322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.475330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.475339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.475356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.475373 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.475398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.475416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.475425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 
5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475433 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.475451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.475471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 
'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.475505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.475529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.475545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.475563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.475580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.475596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.475628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.475637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475645 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.475653 | orchestrator | 2025-02-10 09:41:16.475662 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-02-10 09:41:16.475670 | orchestrator | Monday 10 February 2025 09:36:01 +0000 (0:00:07.096) 0:01:43.413 ******* 2025-02-10 09:41:16.475679 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.475691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475703 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475719 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.475737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.475758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.475771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.475794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.475843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.475860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.475868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.475890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.475912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.475928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.475937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.475958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.475989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.475998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476019 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476031 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.476196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.476215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476239 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.476310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.476333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.476351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 
'timeout': '30'}}})  2025-02-10 09:41:16.476364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.476373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.476458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476467 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.476481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.476489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.476552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.476574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.476596 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:41:16.476605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.476622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.476671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.476706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.476714 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.476778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.476823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.476840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.476889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.476925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.476934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.476943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.476958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:16.477008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.477026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.477035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477048 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:41:16.477061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:16.477070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.477166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.477175 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:41:16.477184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.477192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.477201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.477297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.477306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.477315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.477365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477392 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.477410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 
'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.477418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:16.477489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.477502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.477511 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477520 | orchestrator | 2025-02-10 09:41:16.477528 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-02-10 09:41:16.477536 | orchestrator | Monday 10 February 2025 09:36:07 +0000 (0:00:06.665) 0:01:50.079 ******* 2025-02-10 09:41:16.477545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.477593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477628 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.477646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.477663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.477724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.477754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477826 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.477846 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.477864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.477872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.477934 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.477981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.477990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.478003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.478107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.478144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.478232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.478249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.478266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.478291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.478353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.478362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.478370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.478464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.478494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.478511 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.478570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.478590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.478608 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:41:16.478622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478631 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.478640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.478690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.478720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.478734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.478792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.478844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}}) 
 2025-02-10 09:41:16.478902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.478913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.478931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.478954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.478962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': 
True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.478971 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:41:16.479021 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.479041 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.479063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 
5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.479099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.479150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:41:16.479180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479194 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.479203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.479211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.479284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.479292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:16.479315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.479334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.479385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:16.479412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.479421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.479438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:16.479502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.479516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.479524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479533 | orchestrator | 2025-02-10 09:41:16.479541 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-02-10 09:41:16.479549 | orchestrator | Monday 10 February 2025 09:36:19 +0000 (0:00:11.578) 0:02:01.658 ******* 2025-02-10 09:41:16.479566 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.479617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.479661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.479743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.479760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.479777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.479854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 
'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.479873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.479881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.479890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.479941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.479976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.479993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.480001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480069 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.480112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.480121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.480179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.480199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.480291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.480318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.480330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.480339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.480421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.480433 | orchestrator | skipping: [testbed-node-5] 
2025-02-10 09:41:16.480442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.480451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.480468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.480494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.480562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.480588 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.480605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.480661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.480673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.480682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.480690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.480771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.480792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.480818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.480831 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.480880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.480902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.480911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.480995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481013 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.481048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.481110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.481132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.481149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.481157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.481228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.481250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.481259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.481281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.481296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.481366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.481381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481398 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.481406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.481415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.481464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.481493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.481507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481516 | orchestrator | 2025-02-10 09:41:16.481524 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-02-10 09:41:16.481536 | orchestrator | Monday 10 February 2025 09:36:25 +0000 (0:00:06.571) 0:02:08.229 ******* 2025-02-10 09:41:16.481544 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.481552 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.481561 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:16.481569 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:41:16.481577 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:41:16.481585 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.481593 | orchestrator | 2025-02-10 09:41:16.481601 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-02-10 09:41:16.481609 | orchestrator | Monday 10 February 2025 09:36:36 +0000 (0:00:10.333) 0:02:18.563 ******* 2025-02-10 09:41:16.481661 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.481673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.481714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.481791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.481800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.481822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.481840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.481901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.481914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.481928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.481937 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.481954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482040 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.482051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.482126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.482205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.482217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.482240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.482258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2025-02-10 09:41:16.482275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.482344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.482352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.482379 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.482388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.482470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482489 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.482498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.482546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.482572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.482589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.482606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.482674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': 
True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.482683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482692 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.482708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.482717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.482813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.482830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.482838 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.482913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.482931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.482940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.482955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.483011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.483023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.483040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.483050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.483059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.483067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.483173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.483186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.483195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.483204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.483213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.483232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.483277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.483287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.483296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.483304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.483313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.483331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.483385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.483397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.483406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.483415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.483434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.483448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.483498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.483510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.483519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.483528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.483536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.483550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.483612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.483625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.483634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.483642 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.483660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.483674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.483682 | orchestrator | 2025-02-10 09:41:16.483690 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-02-10 09:41:16.483699 | orchestrator | Monday 10 February 2025 09:36:41 +0000 (0:00:05.188) 0:02:23.751 ******* 2025-02-10 09:41:16.483707 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.483715 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.483723 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.483731 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.483739 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.483747 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.483755 | orchestrator | 2025-02-10 09:41:16.483763 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-02-10 09:41:16.483771 | orchestrator | Monday 10 February 2025 09:36:44 +0000 (0:00:02.726) 0:02:26.478 ******* 2025-02-10 09:41:16.483830 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.483842 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.483850 | orchestrator | skipping: [testbed-node-3] 
2025-02-10 09:41:16.483859 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.483867 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.483875 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.483883 | orchestrator | 2025-02-10 09:41:16.483891 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-02-10 09:41:16.483899 | orchestrator | Monday 10 February 2025 09:36:49 +0000 (0:00:05.846) 0:02:32.325 ******* 2025-02-10 09:41:16.483907 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.483916 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.483923 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.483932 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.483944 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.483952 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.483960 | orchestrator | 2025-02-10 09:41:16.483969 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-02-10 09:41:16.483977 | orchestrator | Monday 10 February 2025 09:36:55 +0000 (0:00:05.635) 0:02:37.960 ******* 2025-02-10 09:41:16.483985 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.483993 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.484001 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.484009 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.484017 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.484025 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.484033 | orchestrator | 2025-02-10 09:41:16.484041 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-02-10 09:41:16.484049 | orchestrator | Monday 10 February 2025 09:37:00 +0000 (0:00:04.699) 0:02:42.660 ******* 2025-02-10 09:41:16.484057 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.484066 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.484089 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.484097 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.484111 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.484120 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.484128 | orchestrator | 2025-02-10 09:41:16.484136 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-02-10 09:41:16.484144 | orchestrator | Monday 10 February 2025 09:37:03 +0000 (0:00:02.949) 0:02:45.610 ******* 2025-02-10 09:41:16.484152 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.484160 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.484168 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.484176 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.484184 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.484192 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.484200 | orchestrator | 2025-02-10 09:41:16.484208 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-02-10 09:41:16.484217 | orchestrator | Monday 10 February 2025 09:37:07 +0000 (0:00:03.977) 0:02:49.587 ******* 2025-02-10 09:41:16.484225 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-02-10 09:41:16.484233 | orchestrator | skipping: [testbed-node-2] 
2025-02-10 09:41:16.484241 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-02-10 09:41:16.484249 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.484257 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-02-10 09:41:16.484265 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.484274 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-02-10 09:41:16.484282 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.484290 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-02-10 09:41:16.484298 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.484306 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-02-10 09:41:16.484314 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.484322 | orchestrator | 2025-02-10 09:41:16.484330 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-02-10 09:41:16.484338 | orchestrator | Monday 10 February 2025 09:37:10 +0000 (0:00:03.311) 0:02:52.898 ******* 2025-02-10 09:41:16.484347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.484404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.484422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.484443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.484452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.484460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.484469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.484518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 
09:41:16.484535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.484544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.484553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.484571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.484579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.484588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 
'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.484638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.484666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.484675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.484684 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.484692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': 
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.484701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.484755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.484776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.484786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.484794 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.484803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.484811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.484858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.484876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.484885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.484902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.484911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.484920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.484972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.485001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.485010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485019 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.485027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.485036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485112 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.485144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.485161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.485169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.485238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.485265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.485274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.485344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.485357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485366 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.485374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 
09:41:16.485383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485391 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.485480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.485501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.485510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.485573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485596 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.485605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.485613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.485635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.485693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': 
{'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485706 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.485715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.485723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485731 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.485817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.485835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.485844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.485872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.485956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.485977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.485992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.486045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.486055 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486090 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.486151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.486164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486173 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.486256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 
09:41:16.486271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.486280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.486289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.486311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.486370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.486392 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.486415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.486424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486439 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.486448 | orchestrator | 2025-02-10 09:41:16.486456 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-02-10 09:41:16.486465 | orchestrator | Monday 10 February 2025 09:37:15 +0000 (0:00:05.082) 0:02:57.981 ******* 2025-02-10 09:41:16.486515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.486527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.486568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.486642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.486650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.486673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.486691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.486755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.486782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.486791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486799 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.486817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.486872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.486907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.486916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 
09:41:16.486925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.486973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.486985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.487018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.487035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.487044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.487172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.487181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487190 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.487204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.487213 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.487313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.487330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.487387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487406 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.487415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.487432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.487449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.487517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.487526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.487544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487633 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.487642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.487650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.487676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.487685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.487752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.487769 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.487778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.487852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.487864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 
5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487873 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.487889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.487899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 
'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.487982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.487991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.487999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.488008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.488025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.488093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.488106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.488115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.488124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.488141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.488156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.488183 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.488193 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.488202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.488211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.488219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.488236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.488268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.488278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.488287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.488296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 
'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.488304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.488322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.488337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.488364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.488374 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.488382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.488391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.488411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.488427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.488436 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.488445 | orchestrator | 2025-02-10 09:41:16.488453 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-02-10 09:41:16.488461 | 
orchestrator | Monday 10 February 2025 09:37:20 +0000 (0:00:04.460) 0:03:02.441 ******* 2025-02-10 09:41:16.488469 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.488478 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.488486 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.488494 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.488503 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.488511 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.488519 | orchestrator | 2025-02-10 09:41:16.488546 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-02-10 09:41:16.488557 | orchestrator | Monday 10 February 2025 09:37:25 +0000 (0:00:05.690) 0:03:08.131 ******* 2025-02-10 09:41:16.488566 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.488574 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.488582 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.488590 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:41:16.488599 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:41:16.488607 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:41:16.488615 | orchestrator | 2025-02-10 09:41:16.488624 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-02-10 09:41:16.488632 | orchestrator | Monday 10 February 2025 09:37:35 +0000 (0:00:10.084) 0:03:18.216 ******* 2025-02-10 09:41:16.488640 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.488649 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.488659 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.488667 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.488676 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.488684 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.488692 | orchestrator | 2025-02-10 09:41:16.488701 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-02-10 09:41:16.488709 | orchestrator | Monday 10 February 2025 09:37:38 +0000 (0:00:02.537) 0:03:20.753 ******* 2025-02-10 09:41:16.488717 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.488726 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.488734 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.488743 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.488753 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.488761 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.488771 | orchestrator | 2025-02-10 09:41:16.488780 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-02-10 09:41:16.488795 | orchestrator | Monday 10 February 2025 09:37:41 +0000 (0:00:03.247) 0:03:24.001 ******* 2025-02-10 09:41:16.488804 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:41:16.488813 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.488822 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:16.488831 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.488839 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.488848 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:41:16.488857 | orchestrator | 2025-02-10 09:41:16.488866 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-02-10 09:41:16.488879 | 
orchestrator | Monday 10 February 2025 09:37:52 +0000 (0:00:11.002) 0:03:35.004 ******* 2025-02-10 09:41:16.488888 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.488897 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.488906 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.488915 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.488924 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.488933 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.488942 | orchestrator | 2025-02-10 09:41:16.488950 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-02-10 09:41:16.488959 | orchestrator | Monday 10 February 2025 09:37:57 +0000 (0:00:04.460) 0:03:39.464 ******* 2025-02-10 09:41:16.488968 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.488977 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.488986 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.488995 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.489004 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.489013 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.489022 | orchestrator | 2025-02-10 09:41:16.489032 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-02-10 09:41:16.489041 | orchestrator | Monday 10 February 2025 09:38:00 +0000 (0:00:03.136) 0:03:42.601 ******* 2025-02-10 09:41:16.489050 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.489059 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.489068 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.489091 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.489099 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.489107 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.489115 | orchestrator | 2025-02-10 09:41:16.489124 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-02-10 09:41:16.489132 | orchestrator | Monday 10 February 2025 09:38:09 +0000 (0:00:09.014) 0:03:51.615 ******* 2025-02-10 09:41:16.489140 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.489148 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.489156 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.489164 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.489172 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.489180 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.489188 | orchestrator | 2025-02-10 09:41:16.489197 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-02-10 09:41:16.489205 | orchestrator | Monday 10 February 2025 09:38:11 +0000 (0:00:02.747) 0:03:54.363 ******* 2025-02-10 09:41:16.489213 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.489221 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.489229 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.489237 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.489245 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.489253 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.489261 | orchestrator | 2025-02-10 09:41:16.489269 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-02-10 
09:41:16.489277 | orchestrator | Monday 10 February 2025 09:38:14 +0000 (0:00:02.452) 0:03:56.815 ******* 2025-02-10 09:41:16.489285 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-02-10 09:41:16.489299 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.489307 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-02-10 09:41:16.489316 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.489324 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-02-10 09:41:16.489332 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.489365 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-02-10 09:41:16.489374 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.489386 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-02-10 09:41:16.489395 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.489406 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-02-10 09:41:16.489414 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.489422 | orchestrator | 2025-02-10 09:41:16.489430 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-02-10 09:41:16.489438 | orchestrator | Monday 10 February 2025 09:38:17 +0000 (0:00:02.869) 0:03:59.685 ******* 2025-02-10 09:41:16.489447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.489456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.489515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.489534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.489568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.489611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.489637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.489650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.489701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.489709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.489717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.489726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.489784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.489792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.489814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.489829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.489865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.489891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.489904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489912 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.489920 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:41:16.489946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.489956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.489986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.490038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.490052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 
'timeout': '30'}}})  2025-02-10 09:41:16.490060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.490113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.490152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.490170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.490184 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.490201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.490237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.490269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.490284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 
'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.490301 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.490328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.490338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.490346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.490384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.490443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.490459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.490468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.490503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.490524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.490549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.490579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 
'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490588 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.490597 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.490610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.490635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.490662 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.490671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490701 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.490709 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.490718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.490726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490735 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.490784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.490808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.490817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.490852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 
'timeout': '30'}}})  2025-02-10 09:41:16.490868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.490876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.490892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.490928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.490937 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.490954 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.490963 | orchestrator | 2025-02-10 09:41:16.490971 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-02-10 09:41:16.490979 | orchestrator | Monday 10 February 2025 09:38:22 +0000 (0:00:04.780) 0:04:04.465 ******* 2025-02-10 09:41:16.490991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.491000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': 
'30'}}})  2025-02-10 09:41:16.491034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.491060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.491092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.491101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.491154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.491173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.491191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.491200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.491209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491227 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.491270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.491316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.491331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.491340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.491365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.491386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.491410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.491459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2025-02-10 09:41:16.491476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.491489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.491511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.491528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': 
True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.491543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-10 09:41:16.491558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 
5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.491595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.491629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.491640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-10 09:41:16.491658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-10 09:41:16.491707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.491724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.491732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.491763 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.491784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.491793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:16.491802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.491822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': 
{'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.491831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491842 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:41:16.491858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.491876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.491884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.491916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.491925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491933 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:41:16.491941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:41:16.491964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.491972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.491992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.492001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:16.492009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.492022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.492040 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.492052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.492061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.492069 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-10 09:41:16.492128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.492137 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 
09:41:16.492146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:41:16.492166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.492180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.492190 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.492202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 
'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.492211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-10 09:41:16.492227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-10 09:41:16.492239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-10 09:41:16.492248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-10 09:41:16.492256 | orchestrator | 2025-02-10 09:41:16.492265 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-02-10 09:41:16.492278 | orchestrator | Monday 10 February 2025 09:38:28 +0000 (0:00:06.334) 0:04:10.800 ******* 2025-02-10 09:41:16.492286 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:41:16.492294 | orchestrator | 
skipping: [testbed-node-1] 2025-02-10 09:41:16.492302 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:41:16.492310 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:41:16.492318 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:41:16.492326 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:41:16.492334 | orchestrator | 2025-02-10 09:41:16.492343 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-02-10 09:41:16.492351 | orchestrator | Monday 10 February 2025 09:38:29 +0000 (0:00:00.816) 0:04:11.617 ******* 2025-02-10 09:41:16.492359 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:16.492367 | orchestrator | 2025-02-10 09:41:16.492375 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-02-10 09:41:16.492383 | orchestrator | Monday 10 February 2025 09:38:31 +0000 (0:00:02.089) 0:04:13.706 ******* 2025-02-10 09:41:16.492391 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:16.492399 | orchestrator | 2025-02-10 09:41:16.492407 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-02-10 09:41:16.492415 | orchestrator | Monday 10 February 2025 09:38:33 +0000 (0:00:02.441) 0:04:16.147 ******* 2025-02-10 09:41:16.492423 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:16.492431 | orchestrator | 2025-02-10 09:41:16.492439 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-02-10 09:41:16.492447 | orchestrator | Monday 10 February 2025 09:39:14 +0000 (0:00:40.854) 0:04:57.002 ******* 2025-02-10 09:41:16.492455 | orchestrator | 2025-02-10 09:41:16.492463 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-02-10 09:41:16.492471 | orchestrator | Monday 10 February 2025 09:39:15 +0000 (0:00:01.362) 0:04:58.365 ******* 2025-02-10 09:41:16.492479 | orchestrator | 2025-02-10 09:41:16.492487 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-02-10 09:41:16.492495 | orchestrator | Monday 10 February 2025 09:39:16 +0000 (0:00:00.308) 0:04:58.673 ******* 2025-02-10 09:41:16.492503 | orchestrator | 2025-02-10 09:41:16.492511 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-02-10 09:41:16.492519 | orchestrator | Monday 10 February 2025 09:39:16 +0000 (0:00:00.254) 0:04:58.927 ******* 2025-02-10 09:41:16.492527 | orchestrator | 2025-02-10 09:41:16.492536 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-02-10 09:41:16.492548 | orchestrator | Monday 10 February 2025 09:39:16 +0000 (0:00:00.172) 0:04:59.100 ******* 2025-02-10 09:41:16.492556 | orchestrator | 2025-02-10 09:41:16.492564 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-02-10 09:41:16.492572 | orchestrator | Monday 10 February 2025 09:39:17 +0000 (0:00:01.025) 0:05:00.126 ******* 2025-02-10 09:41:16.492580 | orchestrator | 2025-02-10 09:41:16.492588 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-02-10 09:41:16.492597 | orchestrator | Monday 10 February 2025 09:39:18 +0000 (0:00:00.295) 0:05:00.421 ******* 2025-02-10 09:41:16.492605 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:41:16.492613 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:41:16.492621 | 
orchestrator | changed: [testbed-node-1]
2025-02-10 09:41:16.492629 | orchestrator |
2025-02-10 09:41:16.492637 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-02-10 09:41:16.492645 | orchestrator | Monday 10 February 2025 09:40:03 +0000 (0:00:45.840) 0:05:46.262 *******
2025-02-10 09:41:16.492653 | orchestrator | changed: [testbed-node-5]
2025-02-10 09:41:16.492661 | orchestrator | changed: [testbed-node-3]
2025-02-10 09:41:16.492669 | orchestrator | changed: [testbed-node-4]
2025-02-10 09:41:16.492677 | orchestrator |
2025-02-10 09:41:16.492685 | orchestrator | RUNNING HANDLER [neutron : Restart ironic-neutron-agent container] *************
2025-02-10 09:41:16.492693 | orchestrator | Monday 10 February 2025 09:40:50 +0000 (0:00:46.357) 0:06:32.619 *******
2025-02-10 09:41:16.492708 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:41:16.492716 | orchestrator | changed: [testbed-node-1]
2025-02-10 09:41:16.492724 | orchestrator | changed: [testbed-node-2]
2025-02-10 09:41:16.492732 | orchestrator |
2025-02-10 09:41:16.492740 | orchestrator | PLAY RECAP *********************************************************************
2025-02-10 09:41:16.492752 | orchestrator | testbed-node-0 : ok=29  changed=18  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-02-10 09:41:19.492420 | orchestrator | testbed-node-1 : ok=19  changed=11  unreachable=0 failed=0 skipped=30  rescued=0 ignored=0
2025-02-10 09:41:19.492574 | orchestrator | testbed-node-2 : ok=19  changed=11  unreachable=0 failed=0 skipped=30  rescued=0 ignored=0
2025-02-10 09:41:19.492595 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-02-10 09:41:19.492610 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-02-10 09:41:19.492624 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-02-10 09:41:19.492639 | orchestrator |
2025-02-10 09:41:19.492654 | orchestrator |
2025-02-10 09:41:19.492670 | orchestrator | TASKS RECAP ********************************************************************
2025-02-10 09:41:19.492686 | orchestrator | Monday 10 February 2025 09:41:15 +0000 (0:00:25.297) 0:06:57.916 *******
2025-02-10 09:41:19.492700 | orchestrator | ===============================================================================
2025-02-10 09:41:19.492713 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 46.36s
2025-02-10 09:41:19.492727 | orchestrator | neutron : Restart neutron-server container ----------------------------- 45.84s
2025-02-10 09:41:19.492741 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.85s
2025-02-10 09:41:19.492755 | orchestrator | neutron : Restart ironic-neutron-agent container ----------------------- 25.30s
2025-02-10 09:41:19.492769 | orchestrator | neutron : Copying over neutron.conf ------------------------------------ 11.58s
2025-02-10 09:41:19.492782 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------ 11.00s
2025-02-10 09:41:19.492796 | orchestrator | neutron : Copying over ssh key ----------------------------------------- 10.33s
2025-02-10 09:41:19.492811 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------ 10.08s
2025-02-10 09:41:19.492825 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 9.66s
2025-02-10 09:41:19.492838 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 9.01s
2025-02-10 09:41:19.492852 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.66s
2025-02-10 09:41:19.492867 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 7.20s
2025-02-10 09:41:19.492881 | orchestrator | neutron : Copying over existing policy file ----------------------------- 7.10s
2025-02-10 09:41:19.492898 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 6.94s
2025-02-10 09:41:19.492922 | orchestrator | neutron : Copying over config.json files for services ------------------- 6.67s
2025-02-10 09:41:19.492947 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 6.58s
2025-02-10 09:41:19.492973 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 6.57s
2025-02-10 09:41:19.492998 | orchestrator | Setting sysctl values --------------------------------------------------- 6.38s
2025-02-10 09:41:19.493022 | orchestrator | neutron : Check neutron containers -------------------------------------- 6.33s
2025-02-10 09:41:19.493046 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 6.31s
2025-02-10 09:41:19.493151 | orchestrator | 2025-02-10 09:41:16 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED
2025-02-10 09:41:19.493182 | orchestrator | 2025-02-10 09:41:16 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED
2025-02-10 09:41:19.493207 | orchestrator | 2025-02-10 09:41:16 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:41:19.493256 | orchestrator | 2025-02-10 09:41:19 | INFO  | Task e47b19e4-1509-40d6-84a4-fd917f91ac34 is in state STARTED
2025-02-10 09:41:19.495496 | orchestrator | 2025-02-10 09:41:19 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED
2025-02-10 09:41:19.497001 | orchestrator | 2025-02-10 09:41:19 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED
2025-02-10 09:41:19.503707 | orchestrator | 2025-02-10 09:41:19 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:41:22.541338 | orchestrator | 2025-02-10 09:41:19 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:41:22.541511 | orchestrator | 2025-02-10 09:41:22 | INFO  | Task e47b19e4-1509-40d6-84a4-fd917f91ac34 is in state STARTED
2025-02-10 09:41:25.584730 | orchestrator | 2025-02-10 09:41:22 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED
2025-02-10 09:41:25.584894 | orchestrator | 2025-02-10 09:41:22 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED
2025-02-10 09:41:25.584959 | orchestrator | 2025-02-10 09:41:22 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:41:25.584993 | orchestrator | 2025-02-10 09:41:22 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:41:25.585043 | orchestrator | 2025-02-10 09:41:25 | INFO  | Task e47b19e4-1509-40d6-84a4-fd917f91ac34 is in state STARTED
2025-02-10 09:41:25.586657 | orchestrator | 2025-02-10 09:41:25 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED
2025-02-10 09:41:28.629019 | orchestrator | 2025-02-10 09:41:25 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED
2025-02-10 09:41:28.629199 | orchestrator | 
2025-02-10 09:41:25 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:41:28.629222 | orchestrator | 2025-02-10 09:41:25 | INFO  | Wait 1 second(s) until the next check
[... repetitive polling output condensed: roughly every 3 seconds from 09:41:28 to 09:44:56 the tasks e47b19e4-1509-40d6-84a4-fd917f91ac34, c010d313-574f-47d2-8bc2-ead04ef5137a, 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae and 0e632de1-53c1-435b-92ca-63329bf84711 are reported in state STARTED, each round followed by "Wait 1 second(s) until the next check"; task eab88b9b-eaf5-47c6-bd01-96499e170ab9 joins the list at 09:42:29 and reaches state SUCCESS at 09:42:44 ...]
2025-02-10 09:44:59.083846 | orchestrator | 2025-02-10 09:44:56 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED 2025-02-10 09:44:59.083999 | orchestrator | 2025-02-10 09:44:56 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state STARTED 2025-02-10 09:44:59.084030 | orchestrator | 2025-02-10 09:44:56 | INFO  | Task
0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:44:59.084046 | orchestrator | 2025-02-10 09:44:56 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:44:59.084115 | orchestrator | 2025-02-10 09:44:59 | INFO  | Task e47b19e4-1509-40d6-84a4-fd917f91ac34 is in state STARTED 2025-02-10 09:44:59.085918 | orchestrator | 2025-02-10 09:44:59 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED 2025-02-10 09:44:59.085971 | orchestrator | 2025-02-10 09:44:59 | INFO  | Task 1d1fbe09-e91f-4220-a30e-44cd6c8de4ae is in state SUCCESS 2025-02-10 09:44:59.087733 | orchestrator | 2025-02-10 09:44:59.087785 | orchestrator | None 2025-02-10 09:44:59.087821 | orchestrator | 2025-02-10 09:44:59.087836 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:44:59.087852 | orchestrator | 2025-02-10 09:44:59.087891 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:44:59.087909 | orchestrator | Monday 10 February 2025 09:38:45 +0000 (0:00:00.288) 0:00:00.288 ******* 2025-02-10 09:44:59.087933 | orchestrator | ok: [testbed-manager] 2025-02-10 09:44:59.088106 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:44:59.088120 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:44:59.088218 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:44:59.088247 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:44:59.088275 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:44:59.088297 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:44:59.088314 | orchestrator | 2025-02-10 09:44:59.088330 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:44:59.088346 | orchestrator | Monday 10 February 2025 09:38:46 +0000 (0:00:01.010) 0:00:01.299 ******* 2025-02-10 09:44:59.088379 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-02-10 09:44:59.088395 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-02-10 09:44:59.088411 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-02-10 09:44:59.088428 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-02-10 09:44:59.088467 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-02-10 09:44:59.088484 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-02-10 09:44:59.088519 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-02-10 09:44:59.088535 | orchestrator | 2025-02-10 09:44:59.088552 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-02-10 09:44:59.088568 | orchestrator | 2025-02-10 09:44:59.088584 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-02-10 09:44:59.088600 | orchestrator | Monday 10 February 2025 09:38:47 +0000 (0:00:01.062) 0:00:02.361 ******* 2025-02-10 09:44:59.088615 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:44:59.088631 | orchestrator | 2025-02-10 09:44:59.088645 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-02-10 09:44:59.088659 | orchestrator | Monday 10 February 2025 09:38:49 +0000 (0:00:01.645) 0:00:04.007 ******* 2025-02-10 09:44:59.088676 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.088697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.088728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.088760 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-10 09:44:59.088776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.088791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.088806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.088829 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.088844 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.088859 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.088881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.088895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.088911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.088926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.088948 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.088963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.088978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.089006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.089022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.089037 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.089051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.089066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.089089 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.089104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.089119 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.089171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.089188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.089203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.089217 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.089241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.089258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.089283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.089310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.089336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.089361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.089399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.089424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.089460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.089480 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-10 09:44:59.089504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.089519 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.089534 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-02-10 09:44:59.089557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.090264 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.090391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.090448 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.090466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.090482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.090518 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.090535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.090559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.090573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.090588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.090603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.090628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.090644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.090660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.090682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.090697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.090712 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.090727 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.090742 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.090782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.090808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.090836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.090872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.090899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.090925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.090963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.090981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.090995 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.091020 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.091035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.091049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.091064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.091078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.091103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.091154 | orchestrator | 2025-02-10 09:44:59.091182 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-02-10 09:44:59.091207 | orchestrator | Monday 10 February 2025 09:38:53 +0000 (0:00:04.407) 0:00:08.414 ******* 2025-02-10 09:44:59.091245 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:44:59.091268 | orchestrator | 2025-02-10 09:44:59.091292 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-02-10 09:44:59.091316 | orchestrator | Monday 10 February 2025 09:38:55 +0000 (0:00:02.107) 0:00:10.522 ******* 2025-02-10 09:44:59.091340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.091357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.091372 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.091386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.091401 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-10 09:44:59.091416 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.091440 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.091464 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.091489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.091513 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.091536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.091562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.091585 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.091615 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.091639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.091654 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.091669 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.091683 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.091698 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.091712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.091727 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.091747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.091768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.091784 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-10 09:44:59.091800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.091814 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.091829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.091843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.091870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.091885 | orchestrator | 2025-02-10 09:44:59.091899 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-02-10 09:44:59.091914 | orchestrator | Monday 10 February 2025 09:39:04 +0000 (0:00:08.603) 0:00:19.126 ******* 2025-02-10 09:44:59.091932 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.091956 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:44:59.091980 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.092005 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.092045 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.092070 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:44:59.092101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:44:59.092116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.092168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.092186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.092201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.092215 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:44:59.092230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:44:59.092245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.092270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.092292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.092306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.092322 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:44:59.092336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:44:59.092351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.092365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.092381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.092402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.092417 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:44:59.092438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:44:59.092454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.092468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.092483 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:44:59.092498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:44:59.092512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.092527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.092549 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:44:59.092564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:44:59.092578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.092600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.092615 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:44:59.092629 | orchestrator | 2025-02-10 09:44:59.092644 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-02-10 09:44:59.092658 | orchestrator | Monday 10 February 2025 09:39:07 +0000 (0:00:02.420) 0:00:21.546 ******* 2025-02-10 09:44:59.092673 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.092690 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:44:59.092715 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.092758 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.092785 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.092820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:44:59.092845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.092870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.092893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.092909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.092956 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:44:59.092973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:44:59.092989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.093004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.093028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.093045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.093063 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:44:59.093088 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:44:59.093113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:44:59.093207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.093238 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.093254 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:44:59.093268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:44:59.093283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.093297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.093320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.093335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.093350 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:44:59.093364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:44:59.093385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.093400 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.093415 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:44:59.093429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-10 09:44:59.093444 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.093465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.093480 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:44:59.093494 | orchestrator | 2025-02-10 09:44:59.093509 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-02-10 09:44:59.093524 | orchestrator | Monday 10 February 2025 09:39:10 +0000 (0:00:03.231) 0:00:24.778 ******* 2025-02-10 09:44:59.093538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.093560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.093575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True}}}})  2025-02-10 09:44:59.093589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.093611 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-10 09:44:59.094115 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.094191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.094228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.094253 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.094278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.094302 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.094341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.094367 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.094405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-02-10 09:44:59.094430 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.094449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.094474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.094498 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.094523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.094558 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.094582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.094621 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.094646 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.094663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.094680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.094697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.094721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.094739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.094769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.094786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.094802 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.094820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.094844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.094862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.094886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.094903 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.094920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-02-10 09:44:59.094938 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.094958 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.094995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.095031 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.095056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.095081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.095104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.095159 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.095187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.095241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.095269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.095294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.095320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.095347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.095398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.095424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.095447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.095475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.095501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.095550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 
'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.095579 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-10 09:44:59.095605 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.095631 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.095658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.095684 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.095724 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.095776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.095793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.095808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.095822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.095837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.095852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.095867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.095897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.095920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.095935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.095950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.095964 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.095979 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.095994 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.096016 | orchestrator | 2025-02-10 09:44:59.096030 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-02-10 09:44:59.096045 | orchestrator | Monday 10 February 2025 09:39:21 +0000 (0:00:11.374) 0:00:36.153 ******* 2025-02-10 09:44:59.096059 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:44:59.096074 | orchestrator | 2025-02-10 09:44:59.096089 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-02-10 09:44:59.096103 | orchestrator | Monday 10 February 2025 09:39:22 +0000 (0:00:00.859) 0:00:37.012 ******* 2025-02-10 09:44:59.096117 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062612, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2645955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096215 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062612, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2645955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2025-02-10 09:44:59.096234 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062612, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2645955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096249 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062612, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2645955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096264 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1062599, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2625957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096279 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062612, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2645955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096304 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062612, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2645955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096319 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1062599, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2625957, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096340 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1062599, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2625957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096355 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062612, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2645955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:44:59.096370 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1062580, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096384 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1062599, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2625957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096397 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1062599, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2625957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096417 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1062599, 'dev': 
216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2625957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096437 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1062580, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096467 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1062580, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096490 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1062580, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096511 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1062582, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096530 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1062580, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096551 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1062580, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096652 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1062582, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096677 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1062582, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096709 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1062582, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096732 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1062595, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2615955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096745 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1062582, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096758 | orchestrator | 
skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1062582, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096780 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1062599, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2625957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:44:59.096793 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1062595, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2615955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096806 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1062584, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2595954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096827 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1062595, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2615955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096841 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1062595, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2615955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096901 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1062595, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2615955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096917 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1062595, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2615955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096937 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1062584, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2595954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096951 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1062584, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2595954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096964 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1062584, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2595954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.096984 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1062591, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097008 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1062584, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2595954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097022 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1062584, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2595954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097035 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1062591, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097056 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1062591, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097069 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1062591, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097082 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 
1062591, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097102 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1062601, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2625957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097125 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1062591, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097163 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1062580, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:44:59.097184 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1062601, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2625957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097197 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1062601, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2625957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097210 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1062601, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2625957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097244 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1062601, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2625957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097276 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1062601, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2625957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097291 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1062609, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2645955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097305 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1062609, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2645955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097325 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1062609, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2645955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097338 | orchestrator | 
skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1062609, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2645955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097351 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1062609, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2645955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097375 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1062609, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2645955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097394 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1062632, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2685957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097408 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1062632, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2685957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097421 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1062632, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2685957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097446 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1062632, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2685957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097459 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1062632, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2685957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097482 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1062582, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:44:59.097496 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1062632, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2685957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097516 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1062604, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2635956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097529 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1062604, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 
1739177102.2635956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097542 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1062604, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2635956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097562 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1062604, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2635956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097576 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1062604, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2635956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097601 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062583, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097615 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1062604, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2635956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097635 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062583, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097649 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062583, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097668 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062583, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097682 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062583, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097704 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062583, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097718 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1062589, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097731 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1062589, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097752 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1062589, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097766 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1062589, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097785 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1062589, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097798 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1062595, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2615955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:44:59.097820 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062579, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2575955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2025-02-10 09:44:59.097834 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062579, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2575955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097847 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1062589, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097865 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062579, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2575955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097879 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062579, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2575955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097899 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062579, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2575955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097921 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1062596, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2615955, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097934 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062579, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2575955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097947 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1062596, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2615955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097960 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1062596, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2615955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.097978 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1062596, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2615955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098501 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1062596, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2615955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098533 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1062629, 
'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2675955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098565 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1062596, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2615955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098580 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1062629, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2675955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098593 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1062629, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2675955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098606 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1062629, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2675955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098656 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1062586, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098683 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1062584, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2595954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:44:59.098709 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1062629, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2675955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098723 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1062629, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2675955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098736 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1062586, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098749 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1062586, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098763 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1062586, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098804 | orchestrator | 
skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1062613, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2655957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098827 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:44:59.098841 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1062613, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2655957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098855 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:44:59.098883 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1062586, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098897 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1062586, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098911 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1062613, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2655957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098924 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:44:59.098942 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 
'inode': 1062613, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2655957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098955 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:44:59.098969 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1062613, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2655957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.098989 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:44:59.099029 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1062613, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2655957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-10 09:44:59.099044 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:44:59.099068 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1062591, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:44:59.099083 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1062601, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2625957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:44:59.099096 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1062609, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2645955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:44:59.099111 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1062632, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2685957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:44:59.099125 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1062604, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2635956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:44:59.099289 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062583, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2585955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:44:59.099345 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1062589, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:44:59.099400 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1062579, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2575955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:44:59.099414 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1062596, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2615955, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:44:59.099426 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1062629, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2675955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:44:59.099438 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1062586, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2605956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:44:59.099450 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1062613, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2655957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-10 09:44:59.099468 | orchestrator | 2025-02-10 09:44:59.099480 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-02-10 09:44:59.099490 | orchestrator | Monday 10 February 2025 09:40:59 +0000 (0:01:37.345) 0:02:14.358 ******* 2025-02-10 09:44:59.099501 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:44:59.099511 | orchestrator | 2025-02-10 09:44:59.099521 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-02-10 09:44:59.099531 | orchestrator | Monday 10 February 2025 09:41:00 +0000 (0:00:01.117) 0:02:15.475 ******* 2025-02-10 09:44:59.099542 | orchestrator | [WARNING]: Skipped 2025-02-10 09:44:59.099552 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:44:59.099562 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-02-10 09:44:59.099573 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:44:59.099583 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-02-10 09:44:59.099598 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:44:59.099608 | orchestrator | [WARNING]: Skipped 2025-02-10 09:44:59.099619 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:44:59.099658 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-02-10 09:44:59.099671 
| orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:44:59.099682 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-02-10 09:44:59.099692 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:44:59.099702 | orchestrator | [WARNING]: Skipped 2025-02-10 09:44:59.099713 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:44:59.099723 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-02-10 09:44:59.099733 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:44:59.099744 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-02-10 09:44:59.099755 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-02-10 09:44:59.099765 | orchestrator | [WARNING]: Skipped 2025-02-10 09:44:59.099775 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:44:59.099785 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-02-10 09:44:59.099796 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:44:59.099806 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-02-10 09:44:59.099816 | orchestrator | [WARNING]: Skipped 2025-02-10 09:44:59.099826 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:44:59.099836 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-02-10 09:44:59.099847 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:44:59.099857 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-02-10 09:44:59.099868 | orchestrator | [WARNING]: Skipped 2025-02-10 09:44:59.099904 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:44:59.099917 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-02-10 09:44:59.099927 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:44:59.099937 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-02-10 09:44:59.099947 | orchestrator | [WARNING]: Skipped 2025-02-10 09:44:59.099957 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:44:59.099967 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-02-10 09:44:59.099977 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-10 09:44:59.099988 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-02-10 09:44:59.100004 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-02-10 09:44:59.100015 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-02-10 09:44:59.100025 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-02-10 09:44:59.100035 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-02-10 09:44:59.100045 | orchestrator | 2025-02-10 09:44:59.100055 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-02-10 09:44:59.100066 | orchestrator | Monday 10 February 2025 09:41:03 +0000 (0:00:02.991) 0:02:18.467 ******* 2025-02-10 09:44:59.100076 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-02-10 09:44:59.100086 | orchestrator | 
skipping: [testbed-node-0] 2025-02-10 09:44:59.100097 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-02-10 09:44:59.100107 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:44:59.100117 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-02-10 09:44:59.100142 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-02-10 09:44:59.100153 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:44:59.100163 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:44:59.100174 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-02-10 09:44:59.100185 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:44:59.100195 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-02-10 09:44:59.100205 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:44:59.100215 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-02-10 09:44:59.100225 | orchestrator | 2025-02-10 09:44:59.100235 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-02-10 09:44:59.100245 | orchestrator | Monday 10 February 2025 09:41:22 +0000 (0:00:18.398) 0:02:36.866 ******* 2025-02-10 09:44:59.100256 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-02-10 09:44:59.100266 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:44:59.100276 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-02-10 09:44:59.100286 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:44:59.100297 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-02-10 09:44:59.100307 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:44:59.100318 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-02-10 09:44:59.100328 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:44:59.100343 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-02-10 09:44:59.100353 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:44:59.100369 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-02-10 09:44:59.100380 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:44:59.100390 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-02-10 09:44:59.100400 | orchestrator | 2025-02-10 09:44:59.100410 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-02-10 09:44:59.100420 | orchestrator | Monday 10 February 2025 09:41:27 +0000 (0:00:04.991) 0:02:41.858 ******* 2025-02-10 09:44:59.100431 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-02-10 09:44:59.100441 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:44:59.100451 | orchestrator | skipping: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-02-10 09:44:59.100467 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:44:59.100477 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-02-10 09:44:59.100488 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:44:59.100498 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-02-10 09:44:59.100508 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:44:59.100518 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-02-10 09:44:59.100528 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:44:59.100539 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-02-10 09:44:59.100549 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:44:59.100560 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-02-10 09:44:59.100570 | orchestrator | 2025-02-10 09:44:59.100580 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-02-10 09:44:59.100591 | orchestrator | Monday 10 February 2025 09:41:31 +0000 (0:00:04.568) 0:02:46.427 ******* 2025-02-10 09:44:59.100601 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:44:59.100611 | orchestrator | 2025-02-10 09:44:59.100622 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-02-10 09:44:59.100632 | orchestrator | Monday 10 February 2025 09:41:32 +0000 (0:00:00.419) 0:02:46.846 ******* 2025-02-10 09:44:59.100642 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:44:59.100652 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:44:59.100662 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:44:59.100672 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:44:59.100683 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:44:59.100693 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:44:59.100703 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:44:59.100714 | orchestrator | 2025-02-10 09:44:59.100724 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-02-10 09:44:59.100734 | orchestrator | Monday 10 February 2025 09:41:33 +0000 (0:00:00.895) 0:02:47.742 ******* 2025-02-10 09:44:59.100744 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:44:59.100754 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:44:59.100764 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:44:59.100775 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:44:59.100785 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:44:59.100800 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:44:59.100811 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:44:59.100821 | orchestrator | 2025-02-10 09:44:59.100831 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-02-10 09:44:59.100842 | orchestrator | Monday 10 February 2025 09:41:39 +0000 (0:00:06.744) 0:02:54.486 ******* 
2025-02-10 09:44:59.100852 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-10 09:44:59.100862 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:44:59.100873 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-10 09:44:59.100883 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:44:59.100893 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-10 09:44:59.100904 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:44:59.100914 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-10 09:44:59.100924 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:44:59.100935 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-10 09:44:59.100951 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:44:59.100962 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-10 09:44:59.100972 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:44:59.100982 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-10 09:44:59.100993 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:44:59.101007 | orchestrator | 2025-02-10 09:44:59.101017 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-02-10 09:44:59.101028 | orchestrator | Monday 10 February 2025 09:41:44 +0000 (0:00:04.549) 0:02:59.036 ******* 2025-02-10 09:44:59.101038 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-02-10 09:44:59.101048 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:44:59.101064 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-02-10 09:44:59.101075 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:44:59.101085 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-02-10 09:44:59.101095 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:44:59.101105 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-02-10 09:44:59.101115 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:44:59.101138 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-02-10 09:44:59.101149 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:44:59.101160 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-02-10 09:44:59.101170 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:44:59.101180 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-02-10 09:44:59.101190 | orchestrator | 2025-02-10 09:44:59.101200 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-02-10 09:44:59.101211 | orchestrator | Monday 10 February 2025 09:41:49 +0000 (0:00:05.337) 0:03:04.374 ******* 2025-02-10 09:44:59.101221 | orchestrator | [WARNING]: 
Skipped 2025-02-10 09:44:59.101231 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-02-10 09:44:59.101241 | orchestrator | due to this access issue: 2025-02-10 09:44:59.101252 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-02-10 09:44:59.101262 | orchestrator | not a directory 2025-02-10 09:44:59.101272 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-10 09:44:59.101282 | orchestrator | 2025-02-10 09:44:59.101297 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-02-10 09:44:59.101308 | orchestrator | Monday 10 February 2025 09:41:54 +0000 (0:00:04.945) 0:03:09.319 ******* 2025-02-10 09:44:59.101318 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:44:59.101328 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:44:59.101338 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:44:59.101348 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:44:59.101358 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:44:59.101368 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:44:59.101379 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:44:59.101389 | orchestrator | 2025-02-10 09:44:59.101399 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-02-10 09:44:59.101409 | orchestrator | Monday 10 February 2025 09:41:56 +0000 (0:00:02.169) 0:03:11.489 ******* 2025-02-10 09:44:59.101420 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:44:59.101430 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:44:59.101449 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:44:59.101459 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:44:59.101470 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:44:59.101480 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:44:59.101490 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:44:59.101500 | orchestrator | 2025-02-10 09:44:59.101510 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-02-10 09:44:59.101521 | orchestrator | Monday 10 February 2025 09:41:59 +0000 (0:00:02.169) 0:03:13.658 ******* 2025-02-10 09:44:59.101531 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-10 09:44:59.101541 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:44:59.101551 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-10 09:44:59.101561 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:44:59.101571 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-10 09:44:59.101581 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:44:59.101592 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-10 09:44:59.101602 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:44:59.101612 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-10 09:44:59.101622 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:44:59.101632 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-10 
09:44:59.101642 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:44:59.101652 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-10 09:44:59.101663 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:44:59.101673 | orchestrator | 2025-02-10 09:44:59.101683 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-02-10 09:44:59.101693 | orchestrator | Monday 10 February 2025 09:42:04 +0000 (0:00:05.180) 0:03:18.838 ******* 2025-02-10 09:44:59.101703 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-10 09:44:59.101714 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:44:59.101724 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-10 09:44:59.101734 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:44:59.101744 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-10 09:44:59.101754 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:44:59.101769 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-10 09:44:59.101780 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:44:59.101790 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-10 09:44:59.101800 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:44:59.101810 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-10 09:44:59.101821 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:44:59.101832 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-10 09:44:59.101842 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:44:59.101853 | orchestrator | 2025-02-10 09:44:59.101863 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-02-10 09:44:59.101873 | orchestrator | Monday 10 February 2025 09:42:09 +0000 (0:00:05.220) 0:03:24.059 ******* 2025-02-10 09:44:59.101884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.101914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.101927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.101938 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-10 09:44:59.101954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.101965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.101982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.102001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.102013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.102053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.102070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-10 09:44:59.102082 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.102098 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.102120 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102184 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.102202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102230 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.102241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.102287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.102296 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-10 09:44:59.102305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102324 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.102337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.102358 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.102368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.102378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.102387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102422 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.102432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.102441 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.102451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.102495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 
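The items in the "Check prometheus containers" loop above all share one kolla-ansible service-definition shape: a key naming the service and a value carrying container_name, group, enabled, image, volumes, dimensions and, for frontend services, an haproxy mapping. The per-host changed/skipping split is consistent with the service's group versus each host's inventory groups (the manager handles prometheus_server and the alertmanager, the control nodes the mysqld/memcached/elasticsearch exporters, the compute nodes the libvirt exporter). A minimal Python sketch of that selection idea follows; the dict shape is copied from the loop items, while the function name and the example group sets are illustrative only and not kolla-ansible code. The loop output continues after the sketch.

# Sketch only: the dict shape mirrors the loop items printed above, but the
# function and the example group sets are made up for illustration and are
# not kolla-ansible code.
def applies_to_host(service, host_groups):
    """True if a kolla-style service definition is deployed on this host."""
    return bool(service.get("enabled")) and service.get("group") in host_groups

prometheus_server = {
    "container_name": "prometheus_server",
    "group": "prometheus",
    "enabled": True,
    "image": "nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1",
    "volumes": ["/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro"],
    "dimensions": {},
}

# testbed-manager is in the 'prometheus' group, the compute nodes are not,
# which matches the changed/skipping split in the log above.
print(applies_to_host(prometheus_server, {"prometheus", "prometheus-node-exporter"}))   # True
print(applies_to_host(prometheus_server, {"prometheus-node-exporter", "prometheus-libvirt-exporter"}))  # False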
 2025-02-10 09:44:59.102505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.102514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.102523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.102545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.102560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102579 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-10 09:44:59.102596 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.102606 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102615 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.102633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.102643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.102659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.102670 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
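Several of the definitions above also embed an haproxy mapping (prometheus_server on port 9091, prometheus_alertmanager on 9093 with basic-auth credentials, each with an internal and an external variant and active_passive set). The sketch below only illustrates what those fields encode; it is not the kolla-ansible loadbalancer template, and the backend host list and the exact haproxy syntax are assumptions. The container-check output resumes after it.

# Sketch only, not the kolla-ansible loadbalancer template: it just shows
# what the 'haproxy' sub-dict above encodes. The backend host list and the
# exact haproxy syntax are assumptions for illustration.
def render_listen_block(name, spec, backend_hosts):
    lines = [f"listen {name}",
             f"    mode {spec.get('mode', 'http')}",
             f"    bind *:{spec['port']}"]
    if spec.get("auth_user"):
        lines.append("    # basic auth (auth_user/auth_pass) would be wired in here")
    for index, host in enumerate(backend_hosts):
        # active_passive: only the first backend takes traffic, the rest are backups
        suffix = " backup" if spec.get("active_passive") and index > 0 else ""
        lines.append(f"    server {host} {host}:{spec['port']} check{suffix}")
    return "\n".join(lines)

alertmanager_internal = {"enabled": True, "mode": "http", "external": False,
                         "port": "9093", "active_passive": True, "auth_user": "admin"}
print(render_listen_block("prometheus_alertmanager", alertmanager_internal, ["testbed-manager"]))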
2025-02-10 09:44:59.102679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102693 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.102707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.102725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.102751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102760 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.102770 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.102783 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.102796 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102813 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.102832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.102851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-10 09:44:59.102878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-10 09:44:59.102895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-10 09:44:59.102904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-10 09:44:59.102913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-10 09:44:59.102936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-10 09:44:59.102945 | orchestrator | 2025-02-10 09:44:59.102954 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-02-10 09:44:59.102962 | orchestrator | Monday 10 February 2025 09:42:16 +0000 (0:00:07.131) 0:03:31.191 ******* 2025-02-10 09:44:59.102971 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-02-10 09:44:59.102980 | orchestrator | 2025-02-10 09:44:59.102989 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-10 09:44:59.102998 | orchestrator 
| Monday 10 February 2025 09:42:21 +0000 (0:00:04.584) 0:03:35.775 ******* 2025-02-10 09:44:59.103007 | orchestrator | 2025-02-10 09:44:59.103015 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-10 09:44:59.103024 | orchestrator | Monday 10 February 2025 09:42:21 +0000 (0:00:00.125) 0:03:35.901 ******* 2025-02-10 09:44:59.103032 | orchestrator | 2025-02-10 09:44:59.103041 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-10 09:44:59.103050 | orchestrator | Monday 10 February 2025 09:42:21 +0000 (0:00:00.100) 0:03:36.001 ******* 2025-02-10 09:44:59.103058 | orchestrator | 2025-02-10 09:44:59.103071 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-10 09:44:59.103080 | orchestrator | Monday 10 February 2025 09:42:21 +0000 (0:00:00.090) 0:03:36.091 ******* 2025-02-10 09:44:59.103089 | orchestrator | 2025-02-10 09:44:59.103097 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-10 09:44:59.103106 | orchestrator | Monday 10 February 2025 09:42:21 +0000 (0:00:00.326) 0:03:36.418 ******* 2025-02-10 09:44:59.103115 | orchestrator | 2025-02-10 09:44:59.103123 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-10 09:44:59.103145 | orchestrator | Monday 10 February 2025 09:42:21 +0000 (0:00:00.083) 0:03:36.502 ******* 2025-02-10 09:44:59.103154 | orchestrator | 2025-02-10 09:44:59.103164 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-10 09:44:59.103172 | orchestrator | Monday 10 February 2025 09:42:22 +0000 (0:00:00.068) 0:03:36.570 ******* 2025-02-10 09:44:59.103181 | orchestrator | 2025-02-10 09:44:59.103189 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-02-10 09:44:59.103198 | orchestrator | Monday 10 February 2025 09:42:22 +0000 (0:00:00.084) 0:03:36.654 ******* 2025-02-10 09:44:59.103207 | orchestrator | changed: [testbed-manager] 2025-02-10 09:44:59.103215 | orchestrator | 2025-02-10 09:44:59.103224 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-02-10 09:44:59.103232 | orchestrator | Monday 10 February 2025 09:42:48 +0000 (0:00:26.102) 0:04:02.756 ******* 2025-02-10 09:44:59.103241 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:44:59.103249 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:44:59.103258 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:44:59.103266 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:44:59.103275 | orchestrator | changed: [testbed-manager] 2025-02-10 09:44:59.103283 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:44:59.103292 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:44:59.103300 | orchestrator | 2025-02-10 09:44:59.103309 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-02-10 09:44:59.103318 | orchestrator | Monday 10 February 2025 09:43:16 +0000 (0:00:28.276) 0:04:31.033 ******* 2025-02-10 09:44:59.103327 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:44:59.103341 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:44:59.103350 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:44:59.103358 | orchestrator | 2025-02-10 09:44:59.103370 | orchestrator | RUNNING HANDLER [prometheus : Restart 
prometheus-memcached-exporter container] *** 2025-02-10 09:44:59.103379 | orchestrator | Monday 10 February 2025 09:43:29 +0000 (0:00:13.409) 0:04:44.443 ******* 2025-02-10 09:44:59.103388 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:44:59.103397 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:44:59.103406 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:44:59.103415 | orchestrator | 2025-02-10 09:44:59.103423 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-02-10 09:44:59.103432 | orchestrator | Monday 10 February 2025 09:43:45 +0000 (0:00:15.363) 0:04:59.807 ******* 2025-02-10 09:44:59.103440 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:44:59.103449 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:44:59.103465 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:44:59.103475 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:44:59.103484 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:44:59.103493 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:44:59.103502 | orchestrator | changed: [testbed-manager] 2025-02-10 09:44:59.103510 | orchestrator | 2025-02-10 09:44:59.103519 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-02-10 09:44:59.103527 | orchestrator | Monday 10 February 2025 09:44:06 +0000 (0:00:21.669) 0:05:21.476 ******* 2025-02-10 09:44:59.103536 | orchestrator | changed: [testbed-manager] 2025-02-10 09:44:59.103544 | orchestrator | 2025-02-10 09:44:59.103553 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-02-10 09:44:59.103561 | orchestrator | Monday 10 February 2025 09:44:19 +0000 (0:00:12.487) 0:05:33.964 ******* 2025-02-10 09:44:59.103570 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:44:59.103578 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:44:59.103586 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:44:59.103595 | orchestrator | 2025-02-10 09:44:59.103604 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-02-10 09:44:59.103613 | orchestrator | Monday 10 February 2025 09:44:33 +0000 (0:00:13.665) 0:05:47.629 ******* 2025-02-10 09:44:59.103621 | orchestrator | changed: [testbed-manager] 2025-02-10 09:44:59.103630 | orchestrator | 2025-02-10 09:44:59.103638 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-02-10 09:44:59.103647 | orchestrator | Monday 10 February 2025 09:44:42 +0000 (0:00:08.927) 0:05:56.557 ******* 2025-02-10 09:44:59.103655 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:44:59.103664 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:44:59.103672 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:44:59.103681 | orchestrator | 2025-02-10 09:44:59.103689 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:44:59.103698 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-02-10 09:44:59.103708 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-02-10 09:44:59.103717 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-02-10 09:44:59.103726 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  
rescued=0 ignored=0 2025-02-10 09:44:59.103734 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-02-10 09:44:59.103747 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-02-10 09:45:02.129165 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-02-10 09:45:02.129308 | orchestrator | 2025-02-10 09:45:02.129328 | orchestrator | 2025-02-10 09:45:02.129344 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:45:02.129361 | orchestrator | Monday 10 February 2025 09:44:56 +0000 (0:00:14.325) 0:06:10.882 ******* 2025-02-10 09:45:02.129375 | orchestrator | =============================================================================== 2025-02-10 09:45:02.129390 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 97.35s 2025-02-10 09:45:02.129404 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 28.28s 2025-02-10 09:45:02.129418 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 26.10s 2025-02-10 09:45:02.129433 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 21.67s 2025-02-10 09:45:02.129446 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.40s 2025-02-10 09:45:02.129460 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 15.36s 2025-02-10 09:45:02.129474 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 14.33s 2025-02-10 09:45:02.129488 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 13.67s 2025-02-10 09:45:02.129502 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 13.41s 2025-02-10 09:45:02.129516 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 12.49s 2025-02-10 09:45:02.129531 | orchestrator | prometheus : Copying over config.json files ---------------------------- 11.37s 2025-02-10 09:45:02.129545 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 8.93s 2025-02-10 09:45:02.129559 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 8.60s 2025-02-10 09:45:02.129573 | orchestrator | prometheus : Check prometheus containers -------------------------------- 7.13s 2025-02-10 09:45:02.129587 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 6.74s 2025-02-10 09:45:02.129602 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 5.34s 2025-02-10 09:45:02.129644 | orchestrator | prometheus : Copying over prometheus msteams template file -------------- 5.22s 2025-02-10 09:45:02.129661 | orchestrator | prometheus : Copying over prometheus msteams config file ---------------- 5.18s 2025-02-10 09:45:02.129678 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.99s 2025-02-10 09:45:02.129694 | orchestrator | prometheus : Find extra prometheus server config files ------------------ 4.95s 2025-02-10 09:45:02.129710 | orchestrator | 2025-02-10 09:44:59 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:45:02.129725 | orchestrator | 2025-02-10 
2025-02-10 09:45:02.129710 | orchestrator | 2025-02-10 09:44:59 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:45:02.129725 | orchestrator | 2025-02-10 09:44:59 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state STARTED
2025-02-10 09:45:02.129852 | orchestrator | 2025-02-10 09:44:59 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:45:02.129890 | orchestrator |
2025-02-10 09:45:02.131977 | orchestrator | 2025-02-10 09:45:02 | INFO  | Task e47b19e4-1509-40d6-84a4-fd917f91ac34 is in state SUCCESS
2025-02-10 09:45:02.132022 | orchestrator |
2025-02-10 09:45:02.132039 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-02-10 09:45:02.132053 | orchestrator |
2025-02-10 09:45:02.132067 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-02-10 09:45:02.132082 | orchestrator | Monday 10 February 2025 09:41:16 +0000 (0:00:00.319) 0:00:00.319 *******
2025-02-10 09:45:02.132096 | orchestrator | ok: [testbed-node-0]
2025-02-10 09:45:02.132118 | orchestrator | ok: [testbed-node-1]
2025-02-10 09:45:02.132162 | orchestrator | ok: [testbed-node-2]
2025-02-10 09:45:02.132176 | orchestrator | ok: [testbed-node-3]
2025-02-10 09:45:02.132190 | orchestrator | ok: [testbed-node-4]
2025-02-10 09:45:02.132204 | orchestrator | ok: [testbed-node-5]
2025-02-10 09:45:02.132248 | orchestrator |
2025-02-10 09:45:02.132263 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-02-10 09:45:02.132277 | orchestrator | Monday 10 February 2025 09:41:16 +0000 (0:00:00.838) 0:00:01.158 *******
2025-02-10 09:45:02.132291 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-02-10 09:45:02.132305 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-02-10 09:45:02.132319 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-02-10 09:45:02.132333 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-02-10 09:45:02.132347 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-02-10 09:45:02.132362 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-02-10 09:45:02.132376 | orchestrator |
2025-02-10 09:45:02.132484 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-02-10 09:45:02.132715 | orchestrator |
2025-02-10 09:45:02.132732 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-02-10 09:45:02.132746 | orchestrator | Monday 10 February 2025 09:41:17 +0000 (0:00:00.655) 0:00:01.813 *******
2025-02-10 09:45:02.132760 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-10 09:45:02.132775 | orchestrator |
2025-02-10 09:45:02.132789 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-02-10 09:45:02.132804 | orchestrator | Monday 10 February 2025 09:41:18 +0000 (0:00:01.162) 0:00:02.975 *******
2025-02-10 09:45:02.132819 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-02-10 09:45:02.132833 | orchestrator |
2025-02-10 09:45:02.132847 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-02-10 09:45:02.132861 | orchestrator | Monday 10 February 2025 09:41:21 +0000 (0:00:03.153) 0:00:06.129 *******
2025-02-10 09:45:02.132875 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-02-10 09:45:02.132890 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-02-10 09:45:02.132904 | orchestrator |
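The service-ks-register tasks above have just created the cinderv3 service and its internal and public endpoints in Keystone. A minimal spot-check from the manager, assuming admin credentials are loaded into the environment (e.g. via an openrc file):

  $ openstack service show cinderv3
  # both the internal and the public URL from the log should appear here
  $ openstack endpoint list --service cinderv3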
2025-02-10 09:45:02.132942 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-02-10 09:45:02.132961 | orchestrator | Monday 10 February 2025 09:41:28 +0000 (0:00:06.959) 0:00:13.089 *******
2025-02-10 09:45:02.132976 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-02-10 09:45:02.132989 | orchestrator |
2025-02-10 09:45:02.133003 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-02-10 09:45:02.133017 | orchestrator | Monday 10 February 2025 09:41:32 +0000 (0:00:03.335) 0:00:16.424 *******
2025-02-10 09:45:02.133031 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-02-10 09:45:02.133045 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-02-10 09:45:02.133059 | orchestrator |
2025-02-10 09:45:02.133149 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-02-10 09:45:02.133181 | orchestrator | Monday 10 February 2025 09:41:36 +0000 (0:00:04.263) 0:00:20.688 *******
2025-02-10 09:45:02.133195 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-02-10 09:45:02.133210 | orchestrator |
2025-02-10 09:45:02.133224 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-02-10 09:45:02.133237 | orchestrator | Monday 10 February 2025 09:41:40 +0000 (0:00:03.878) 0:00:24.566 *******
2025-02-10 09:45:02.133475 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-02-10 09:45:02.133493 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2025-02-10 09:45:02.133507 | orchestrator |
2025-02-10 09:45:02.133522 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-02-10 09:45:02.133536 | orchestrator | Monday 10 February 2025 09:41:49 +0000 (0:00:09.282) 0:00:33.848 *******
2025-02-10 09:45:02.133567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-02-10 09:45:02.133670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:45:02.133691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.133706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.133722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:45:02.133746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 
2025-02-10 09:45:02.133839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.133874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.133890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.133905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.134379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.134516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.134539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.134569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.134585 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.134600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.134627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.134687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.134706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.134722 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.134736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.134760 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.134824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.134842 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 
09:45:02.134857 | orchestrator | 2025-02-10 09:45:02.134871 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-02-10 09:45:02.134886 | orchestrator | Monday 10 February 2025 09:41:53 +0000 (0:00:04.032) 0:00:37.881 ******* 2025-02-10 09:45:02.134900 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:45:02.134914 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:45:02.134928 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:45:02.134942 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:45:02.134957 | orchestrator | 2025-02-10 09:45:02.134971 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-02-10 09:45:02.134985 | orchestrator | Monday 10 February 2025 09:41:55 +0000 (0:00:01.819) 0:00:39.701 ******* 2025-02-10 09:45:02.134999 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-02-10 09:45:02.135013 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-02-10 09:45:02.135027 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-02-10 09:45:02.135041 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-02-10 09:45:02.135055 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-02-10 09:45:02.135069 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-02-10 09:45:02.135082 | orchestrator | 2025-02-10 09:45:02.135097 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-02-10 09:45:02.135218 | orchestrator | Monday 10 February 2025 09:41:59 +0000 (0:00:04.208) 0:00:43.909 ******* 2025-02-10 09:45:02.135241 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-02-10 09:45:02.135260 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-02-10 09:45:02.135334 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-02-10 09:45:02.135353 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-02-10 09:45:02.135368 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-02-10 09:45:02.135394 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-02-10 09:45:02.135409 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-02-10 09:45:02.135472 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-02-10 09:45:02.135490 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-02-10 09:45:02.135506 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-02-10 09:45:02.135529 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-02-10 09:45:02.135555 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-02-10 09:45:02.135571 | orchestrator |
2025-02-10 09:45:02.135585 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-02-10 09:45:02.135599 | orchestrator | Monday 10 February 2025 09:42:06 +0000 (0:00:07.113) 0:00:51.022 *******
2025-02-10 09:45:02.135614 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-02-10 09:45:02.135661 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-02-10 09:45:02.135678 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-02-10 09:45:02.135692 | orchestrator |
2025-02-10 09:45:02.135706 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-02-10 09:45:02.135720 | orchestrator | Monday 10 February 2025 09:42:09 +0000 (0:00:02.973) 0:00:53.997 *******
2025-02-10 09:45:02.135734 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-02-10 09:45:02.135749 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-02-10 09:45:02.135763 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-02-10 09:45:02.135777 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-02-10 09:45:02.135791 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-02-10 09:45:02.135804 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-02-10 09:45:02.135818 | orchestrator |
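The two keyring tasks above distribute ceph.client.cinder.keyring and ceph.client.cinder-backup.keyring to the storage nodes. A minimal spot-check, assuming the files land under the /etc/kolla/<service>/ directories that are bind-mounted into the containers (the pool name below is a placeholder for whatever the rbd-1 backend is configured to use):

  # e.g. on testbed-node-3: the keyrings copied by the tasks above
  $ ls -l /etc/kolla/cinder-volume/ /etc/kolla/cinder-backup/
  # check that the cinder client key is accepted by the external Ceph cluster
  $ rbd -c /etc/kolla/cinder-volume/ceph.conf \
        -k /etc/kolla/cinder-volume/ceph.client.cinder.keyring \
        --id cinder -p <pool> ls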
2025-02-10 09:45:02.135832 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-02-10 09:45:02.135844 | orchestrator | Monday 10 February 2025 09:42:14 +0000 (0:00:01.190) 0:00:58.583 *******
2025-02-10 09:45:02.135857 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-02-10 09:45:02.135869 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-02-10 09:45:02.135889 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-02-10 09:45:02.135902 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-02-10 09:45:02.135914 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-02-10 09:45:02.135927 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-02-10 09:45:02.135939 | orchestrator |
2025-02-10 09:45:02.135951 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-02-10 09:45:02.135964 | orchestrator | Monday 10 February 2025 09:42:15 +0000 (0:00:00.242) 0:00:59.774 *******
2025-02-10 09:45:02.135977 | orchestrator | skipping: [testbed-node-0]
2025-02-10 09:45:02.135989 | orchestrator |
2025-02-10 09:45:02.136002 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-02-10 09:45:02.136014 | orchestrator | Monday 10 February 2025 09:42:15 +0000 (0:00:00.242) 0:01:00.017 *******
2025-02-10 09:45:02.136026 | orchestrator | skipping: [testbed-node-0]
2025-02-10 09:45:02.136038 | orchestrator | skipping: [testbed-node-1]
2025-02-10 09:45:02.136050 | orchestrator | skipping: [testbed-node-2]
2025-02-10 09:45:02.136063 | orchestrator | skipping: [testbed-node-3]
2025-02-10 09:45:02.136076 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:45:02.136088 | orchestrator | skipping: [testbed-node-5]
2025-02-10 09:45:02.136100 | orchestrator |
2025-02-10 09:45:02.136112 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-02-10 09:45:02.136125 | orchestrator | Monday 10 February 2025 09:42:16 +0000 (0:00:00.725) 0:01:00.742 *******
2025-02-10 09:45:02.136156 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-10 09:45:02.136170 | orchestrator |
2025-02-10 09:45:02.136182 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-02-10 09:45:02.136195 | orchestrator | Monday 10 February 2025 09:42:18 +0000 (0:00:02.306) 0:01:03.048 *******
2025-02-10 09:45:02.136208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-02-10 09:45:02.136257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:45:02.136273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:45:02.136294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.136319 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.136333 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.136347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.136391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.136414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.136437 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.136451 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.136464 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.136477 | orchestrator | 2025-02-10 09:45:02.136490 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-02-10 09:45:02.136503 | orchestrator | Monday 10 February 2025 09:42:23 +0000 (0:00:04.465) 0:01:07.513 ******* 2025-02-10 09:45:02.136547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.136569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.136596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.136610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.136623 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:45:02.136636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.136649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.136668 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:45:02.136681 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:45:02.136724 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.136750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.136764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.136777 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:45:02.136790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.136802 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:45:02.136815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.136869 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.136885 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:45:02.136898 | orchestrator | 2025-02-10 09:45:02.136910 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-02-10 09:45:02.136923 | orchestrator | Monday 10 February 2025 09:42:25 +0000 (0:00:02.316) 0:01:09.830 ******* 2025-02-10 09:45:02.136947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.136961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.136974 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:45:02.136987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.137018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.137032 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:45:02.137076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.137103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.137117 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:45:02.137151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.137165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.137178 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:45:02.137191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.137252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.137268 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:45:02.137293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.137307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.137320 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:45:02.137332 | orchestrator | 2025-02-10 09:45:02.137345 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-02-10 09:45:02.137357 | orchestrator | Monday 10 February 2025 09:42:28 +0000 (0:00:03.164) 0:01:12.995 ******* 2025-02-10 09:45:02.137370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:45:02.137394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.137444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.137471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:45:02.137485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.137499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.137519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:45:02.137560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.137585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.137599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.137612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.137634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.137647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 
'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.137700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.137717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.137730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.137743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.137763 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.137814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.137831 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.137844 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.137857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.137879 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.137903 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.137917 | orchestrator | 2025-02-10 09:45:02.137934 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-02-10 09:45:02.137947 | orchestrator | Monday 10 February 2025 09:42:37 +0000 (0:00:08.699) 0:01:21.694 ******* 2025-02-10 09:45:02.137960 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-02-10 09:45:02.137973 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:45:02.137990 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-02-10 09:45:02.138003 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:45:02.138055 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-02-10 09:45:02.138071 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:45:02.138083 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-02-10 09:45:02.138096 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-02-10 09:45:02.138108 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-02-10 09:45:02.138120 | orchestrator | 2025-02-10 09:45:02.138151 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-02-10 09:45:02.138164 | 
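Note for readers skimming the (item=...) dicts above: every cinder task in this play loops over the same kolla service map, so the per-host results differ only in which service is enabled on which host group. A condensed sketch of that map, assembled from the item values printed in this log (images, healthchecks and haproxy settings copied verbatim; layout simplified for readability, not the actual kolla-ansible variable), is:

    # Sketch only -- condensed from the (item=...) dicts printed in this log.
    cinder_services = {
        'cinder-api': {            # control nodes testbed-node-0..2
            'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1',
            'healthcheck': 'healthcheck_curl http://<node-ip>:8776',  # .10/.11/.12 in this run
            'haproxy': {
                'cinder_api':          {'port': '8776', 'external': False},
                'cinder_api_external': {'port': '8776', 'external': True,
                                        'external_fqdn': 'api.testbed.osism.xyz'},
            },
        },
        'cinder-scheduler': {      # control nodes testbed-node-0..2
            'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1',
            'healthcheck': 'healthcheck_port cinder-scheduler 5672',
        },
        'cinder-volume': {         # storage nodes testbed-node-3..5, privileged
            'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1',
            'healthcheck': 'healthcheck_port cinder-volume 5672',
        },
        'cinder-backup': {         # storage nodes testbed-node-3..5, privileged
            'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1',
            'healthcheck': 'healthcheck_port cinder-backup 5672',
        },
    }

This is why each task prints "skipping" on the control nodes for the cinder-volume/cinder-backup items and on the storage nodes for the cinder-api/cinder-scheduler items, while the matching hosts report "changed".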
orchestrator | Monday 10 February 2025 09:42:44 +0000 (0:00:06.928) 0:01:28.622 ******* 2025-02-10 09:45:02.138177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.138198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.138212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.138225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.138260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.138275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.138288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:45:02.138307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:45:02.138320 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.138349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:45:02.138363 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.138382 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.138396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.138408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.138436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.138451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.138470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.138483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.138496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.138525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.138539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.138552 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2025-02-10 09:45:02.138572 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.138585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.138598 | orchestrator | 2025-02-10 09:45:02.138611 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-02-10 09:45:02.138623 | orchestrator | Monday 10 February 2025 09:43:08 +0000 (0:00:24.500) 0:01:53.122 ******* 2025-02-10 09:45:02.138636 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:45:02.138648 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:45:02.138661 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:45:02.138673 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:45:02.138685 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:45:02.138698 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:45:02.138716 | orchestrator | 2025-02-10 09:45:02.138729 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-02-10 09:45:02.138741 | orchestrator | Monday 10 February 2025 09:43:13 +0000 (0:00:04.791) 0:01:57.914 ******* 2025-02-10 09:45:02.138766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.138790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': 
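The cinder-api healthcheck entries above poll each API node directly on port 8776 (192.168.16.10-12), while clients reach the service through haproxy at api.testbed.osism.xyz. A minimal stand-alone probe in the same spirit as those healthcheck_curl entries (an illustration using only addresses and the port taken from this log, not the kolla healthcheck script itself) could look like:

    import urllib.error
    import urllib.request

    # Illustration only: poll each cinder-api backend listed in the healthcheck
    # definitions above (addresses and port 8776 are taken from this log).
    for host in ("192.168.16.10", "192.168.16.11", "192.168.16.12"):
        url = f"http://{host}:8776"
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                print(f"{url} -> up (HTTP {resp.status})")
        except urllib.error.HTTPError as err:
            # cinder-api answers its version document with 300 Multiple Choices,
            # which urllib surfaces as HTTPError but still proves the API is up.
            print(f"{url} -> up (HTTP {err.code})")
        except OSError as err:
            print(f"{url} -> down ({err})")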
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.138814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.138827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.138840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.138853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.138881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.138901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.138915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.138928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.138940 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:45:02.138953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.138983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.139003 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:45:02.139016 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:45:02.139029 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.139042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.139055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.139067 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.139080 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:45:02.139108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.139157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.139172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.139185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.139198 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:45:02.139211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.139241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.139262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 
09:45:02.139276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.139289 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:45:02.139301 | orchestrator | 2025-02-10 09:45:02.139314 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-02-10 09:45:02.139327 | orchestrator | Monday 10 February 2025 09:43:16 +0000 (0:00:02.504) 0:02:00.418 ******* 2025-02-10 09:45:02.139340 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:45:02.139352 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:45:02.139364 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:45:02.139377 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:45:02.139397 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:45:02.139413 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:45:02.139425 | orchestrator | 2025-02-10 09:45:02.139438 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-02-10 09:45:02.139450 | orchestrator | Monday 10 February 2025 09:43:18 +0000 (0:00:02.118) 0:02:02.537 ******* 2025-02-10 09:45:02.139463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.139476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.139517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.139531 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.139544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-10 09:45:02.139557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.139570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:45:02.139613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:45:02.139628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-10 09:45:02.139641 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.139654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.139675 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.139704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.139718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.139730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.139743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.139756 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.139784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.139804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.139817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.139830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.139843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-10 09:45:02.139864 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.139889 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-10 09:45:02.139903 | orchestrator | 2025-02-10 09:45:02.139916 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-02-10 09:45:02.139928 | orchestrator | Monday 10 February 2025 09:43:23 +0000 (0:00:05.623) 0:02:08.161 ******* 2025-02-10 09:45:02.139940 | orchestrator | skipping: [testbed-node-0] 
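
Note: the container definitions listed above attach Docker healthchecks of the form `healthcheck_curl <url>` (API containers) and `healthcheck_port <service> <port>` (services that must hold a connection to RabbitMQ on port 5672). The sketch below is only a rough, external approximation of what those checks assert; it is not the actual kolla healthcheck scripts, which run inside the containers, and the addresses are simply the values visible in this log.

import socket
import urllib.error
import urllib.request

def probe_tcp_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Rough stand-in for 'healthcheck_port': can the given TCP port be reached?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def probe_http_endpoint(url: str, timeout: float = 30.0) -> bool:
    """Rough stand-in for 'healthcheck_curl': does the URL answer with any HTTP status?"""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # an HTTP error response still means the API process answered
    except (urllib.error.URLError, OSError):
        return False

# Values taken from the log above, for illustration only.
print(probe_http_endpoint("http://192.168.16.10:8776"))  # cinder-api on testbed-node-0
print(probe_tcp_port("192.168.16.10", 5672))             # RabbitMQ port named in healthcheck_port
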
2025-02-10 09:45:02.139953 | orchestrator | skipping: [testbed-node-1]
2025-02-10 09:45:02.139965 | orchestrator | skipping: [testbed-node-2]
2025-02-10 09:45:02.139978 | orchestrator | skipping: [testbed-node-3]
2025-02-10 09:45:02.139990 | orchestrator | skipping: [testbed-node-4]
2025-02-10 09:45:02.140003 | orchestrator | skipping: [testbed-node-5]
2025-02-10 09:45:02.140015 | orchestrator |
2025-02-10 09:45:02.140027 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2025-02-10 09:45:02.140040 | orchestrator | Monday 10 February 2025 09:43:26 +0000 (0:00:02.517) 0:02:10.678 *******
2025-02-10 09:45:02.140052 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:45:02.140064 | orchestrator |
2025-02-10 09:45:02.140077 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-02-10 09:45:02.140089 | orchestrator | Monday 10 February 2025 09:43:28 +0000 (0:00:02.544) 0:02:13.222 *******
2025-02-10 09:45:02.140101 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:45:02.140113 | orchestrator |
2025-02-10 09:45:02.140126 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-02-10 09:45:02.140156 | orchestrator | Monday 10 February 2025 09:43:32 +0000 (0:00:03.105) 0:02:16.327 *******
2025-02-10 09:45:02.140169 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:45:02.140181 | orchestrator |
2025-02-10 09:45:02.140194 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-02-10 09:45:02.140206 | orchestrator | Monday 10 February 2025 09:43:53 +0000 (0:00:21.240) 0:02:37.568 *******
2025-02-10 09:45:02.140218 | orchestrator |
2025-02-10 09:45:02.140231 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-02-10 09:45:02.140243 | orchestrator | Monday 10 February 2025 09:43:54 +0000 (0:00:01.419) 0:02:38.987 *******
2025-02-10 09:45:02.140256 | orchestrator |
2025-02-10 09:45:02.140268 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-02-10 09:45:02.140280 | orchestrator | Monday 10 February 2025 09:43:55 +0000 (0:00:00.283) 0:02:39.271 *******
2025-02-10 09:45:02.140293 | orchestrator |
2025-02-10 09:45:02.140305 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-02-10 09:45:02.140317 | orchestrator | Monday 10 February 2025 09:43:55 +0000 (0:00:00.216) 0:02:39.488 *******
2025-02-10 09:45:02.140330 | orchestrator |
2025-02-10 09:45:02.140342 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-02-10 09:45:02.140360 | orchestrator | Monday 10 February 2025 09:43:55 +0000 (0:00:00.187) 0:02:39.675 *******
2025-02-10 09:45:02.140373 | orchestrator |
2025-02-10 09:45:02.140385 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-02-10 09:45:02.140397 | orchestrator | Monday 10 February 2025 09:43:56 +0000 (0:00:00.971) 0:02:40.647 *******
2025-02-10 09:45:02.140410 | orchestrator |
2025-02-10 09:45:02.140422 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-02-10 09:45:02.140434 | orchestrator | Monday 10 February 2025 09:43:56 +0000 (0:00:00.246) 0:02:40.894 *******
2025-02-10 09:45:02.140447 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:45:02.140459 | orchestrator | changed: [testbed-node-1]
2025-02-10 09:45:02.140472 | orchestrator | changed: [testbed-node-2]
2025-02-10 09:45:02.140484 | orchestrator |
2025-02-10 09:45:02.140496 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-02-10 09:45:02.140509 | orchestrator | Monday 10 February 2025 09:44:22 +0000 (0:00:25.393) 0:03:06.288 *******
2025-02-10 09:45:02.140521 | orchestrator | changed: [testbed-node-1]
2025-02-10 09:45:02.140533 | orchestrator | changed: [testbed-node-2]
2025-02-10 09:45:02.140545 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:45:02.140558 | orchestrator |
2025-02-10 09:45:02.140570 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-02-10 09:45:02.140583 | orchestrator | Monday 10 February 2025 09:44:35 +0000 (0:00:13.295) 0:03:19.583 *******
2025-02-10 09:45:02.140595 | orchestrator | changed: [testbed-node-4]
2025-02-10 09:45:02.140608 | orchestrator | changed: [testbed-node-3]
2025-02-10 09:45:02.140620 | orchestrator | changed: [testbed-node-5]
2025-02-10 09:45:02.140632 | orchestrator |
2025-02-10 09:45:02.140645 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-02-10 09:45:02.140657 | orchestrator | Monday 10 February 2025 09:44:55 +0000 (0:00:20.259) 0:03:39.842 *******
2025-02-10 09:45:02.140669 | orchestrator | changed: [testbed-node-5]
2025-02-10 09:45:02.140682 | orchestrator | changed: [testbed-node-4]
2025-02-10 09:45:02.140694 | orchestrator | changed: [testbed-node-3]
2025-02-10 09:45:02.140707 | orchestrator |
2025-02-10 09:45:02.140724 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-02-10 09:45:02.140737 | orchestrator | Monday 10 February 2025 09:45:00 +0000 (0:00:05.290) 0:03:45.133 *******
2025-02-10 09:45:02.140749 | orchestrator | skipping: [testbed-node-0]
2025-02-10 09:45:02.140762 | orchestrator |
2025-02-10 09:45:02.140776 | orchestrator | PLAY RECAP *********************************************************************
2025-02-10 09:45:02.140796 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-02-10 09:45:02.140817 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-02-10 09:45:02.140837 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-02-10 09:45:02.140856 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-02-10 09:45:02.140885 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-02-10 09:45:05.199541 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-02-10 09:45:05.199694 | orchestrator |
2025-02-10 09:45:05.199714 | orchestrator |
2025-02-10 09:45:05.199730 | orchestrator | TASKS RECAP ********************************************************************
2025-02-10 09:45:05.199747 | orchestrator | Monday 10 February 2025 09:45:01 +0000 (0:00:00.562) 0:03:45.695 *******
2025-02-10 09:45:05.199762 | orchestrator | ===============================================================================
2025-02-10 09:45:05.199812 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.39s
2025-02-10 09:45:05.199828 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 24.50s
2025-02-10 09:45:05.199843 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.24s
2025-02-10 09:45:05.199857 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 20.26s
2025-02-10 09:45:05.199872 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 13.30s
2025-02-10 09:45:05.199887 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 9.28s
2025-02-10 09:45:05.199902 | orchestrator | cinder : Copying over config.json files for services -------------------- 8.70s
2025-02-10 09:45:05.199917 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 7.11s
2025-02-10 09:45:05.199932 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.96s
2025-02-10 09:45:05.199946 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 6.93s
2025-02-10 09:45:05.199961 | orchestrator | cinder : Check cinder containers ---------------------------------------- 5.62s
2025-02-10 09:45:05.199975 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.29s
2025-02-10 09:45:05.199990 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 4.79s
2025-02-10 09:45:05.200004 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.59s
2025-02-10 09:45:05.200019 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.47s
2025-02-10 09:45:05.200033 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.26s
2025-02-10 09:45:05.200047 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 4.21s
2025-02-10 09:45:05.200062 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 4.03s
2025-02-10 09:45:05.200077 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.88s
2025-02-10 09:45:05.200091 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.34s
2025-02-10 09:45:05.200109 | orchestrator | 2025-02-10 09:45:02 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED
2025-02-10 09:45:05.200156 | orchestrator | 2025-02-10 09:45:02 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:45:05.200208 | orchestrator | 2025-02-10 09:45:02 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state STARTED
2025-02-10 09:45:05.200233 | orchestrator | 2025-02-10 09:45:02 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:45:05.200268 | orchestrator | 2025-02-10 09:45:05 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED
2025-02-10 09:45:05.201880 | orchestrator | 2025-02-10 09:45:05 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED
2025-02-10 09:45:05.201912 | orchestrator | 2025-02-10 09:45:05 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:45:05.201934 | orchestrator | 2025-02-10 09:45:05 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state STARTED
2025-02-10 09:45:08.248674 | orchestrator | 2025-02-10 09:45:05 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:45:08.248801 | orchestrator | 2025-02-10 09:45:08 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED
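
Note: the repeated INFO lines around this point ("Task <uuid> is in state STARTED", "Wait 1 second(s) until the next check") come from the deploy wrapper polling the state of the queued manager tasks once per second until they all finish. Below is a minimal, framework-neutral sketch of that polling pattern; `wait_for_tasks`, `get_task_state`, and the canned states are illustrative stand-ins, not the actual OSISM client API.

import time
from typing import Callable, Iterable

def wait_for_tasks(task_ids: Iterable[str],
                   get_task_state: Callable[[str], str],
                   interval: int = 1) -> None:
    """Print each task's state and re-check every `interval` seconds until all finish."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

# Toy example with a canned sequence of states; the real task IDs are the UUIDs
# visible in the log (e.g. c010d313-574f-47d2-8bc2-ead04ef5137a), and the real
# states come from the manager's task backend. Only the wait-and-re-check
# pattern is shown here.
states = {"c010d313": iter(["STARTED", "STARTED", "SUCCESS"])}
wait_for_tasks(states, lambda task_id: next(states[task_id]))
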
2025-02-10 09:45:08.249269 | orchestrator | 2025-02-10 09:45:08 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED
2025-02-10 09:45:08.250071 | orchestrator | 2025-02-10 09:45:08 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:45:08.250921 | orchestrator | 2025-02-10 09:45:08 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state STARTED
2025-02-10 09:45:11.306483 | orchestrator | 2025-02-10 09:45:08 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:45:11.306662 | orchestrator | 2025-02-10 09:45:11 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED
2025-02-10 09:45:11.307577 | orchestrator | 2025-02-10 09:45:11 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED
2025-02-10 09:45:11.307610 | orchestrator | 2025-02-10 09:45:11 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:45:11.309584 | orchestrator | 2025-02-10 09:45:11 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state STARTED
2025-02-10 09:45:11.310614 | orchestrator | 2025-02-10 09:45:11 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:45:14.361427 | orchestrator | 2025-02-10 09:45:14 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED
2025-02-10 09:45:14.361713 | orchestrator | 2025-02-10 09:45:14 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED
2025-02-10 09:45:14.361872 | orchestrator | 2025-02-10 09:45:14 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:45:14.362350 | orchestrator | 2025-02-10 09:45:14 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state STARTED
2025-02-10 09:45:17.401830 | orchestrator | 2025-02-10 09:45:14 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:45:17.401949 | orchestrator | 2025-02-10 09:45:17 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED
2025-02-10 09:45:17.402552 | orchestrator | 2025-02-10 09:45:17 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED
2025-02-10 09:45:17.402568 | orchestrator | 2025-02-10 09:45:17 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:45:17.403375 | orchestrator | 2025-02-10 09:45:17 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state STARTED
2025-02-10 09:45:20.446520 | orchestrator | 2025-02-10 09:45:17 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:45:20.446685 | orchestrator | 2025-02-10 09:45:20 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED
2025-02-10 09:45:20.447032 | orchestrator | 2025-02-10 09:45:20 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED
2025-02-10 09:45:20.447069 | orchestrator | 2025-02-10 09:45:20 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:45:20.447797 | orchestrator | 2025-02-10 09:45:20 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state STARTED
2025-02-10 09:45:23.496190 | orchestrator | 2025-02-10 09:45:20 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:45:23.496372 | orchestrator | 2025-02-10 09:45:23 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED
2025-02-10 09:45:23.496704 | orchestrator | 2025-02-10 09:45:23 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED
2025-02-10 09:45:23.496738 | orchestrator | 2025-02-10 09:45:23 | INFO  | Task 
0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:45:23.496764 | orchestrator | 2025-02-10 09:45:23 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state STARTED 2025-02-10 09:45:26.542431 | orchestrator | 2025-02-10 09:45:23 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:26.542719 | orchestrator | 2025-02-10 09:45:26 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED 2025-02-10 09:45:29.580324 | orchestrator | 2025-02-10 09:45:26 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:45:29.580465 | orchestrator | 2025-02-10 09:45:26 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:45:29.580517 | orchestrator | 2025-02-10 09:45:26 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state STARTED 2025-02-10 09:45:29.580534 | orchestrator | 2025-02-10 09:45:26 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:29.580568 | orchestrator | 2025-02-10 09:45:29 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED 2025-02-10 09:45:29.581886 | orchestrator | 2025-02-10 09:45:29 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:45:29.581921 | orchestrator | 2025-02-10 09:45:29 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:45:29.583441 | orchestrator | 2025-02-10 09:45:29 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state STARTED 2025-02-10 09:45:29.585876 | orchestrator | 2025-02-10 09:45:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:32.631825 | orchestrator | 2025-02-10 09:45:32 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED 2025-02-10 09:45:32.632083 | orchestrator | 2025-02-10 09:45:32 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:45:32.632113 | orchestrator | 2025-02-10 09:45:32 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:45:32.632165 | orchestrator | 2025-02-10 09:45:32 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state STARTED 2025-02-10 09:45:35.674696 | orchestrator | 2025-02-10 09:45:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:35.674887 | orchestrator | 2025-02-10 09:45:35 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED 2025-02-10 09:45:35.675715 | orchestrator | 2025-02-10 09:45:35 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:45:35.679375 | orchestrator | 2025-02-10 09:45:35 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:45:35.680380 | orchestrator | 2025-02-10 09:45:35 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state STARTED 2025-02-10 09:45:35.680556 | orchestrator | 2025-02-10 09:45:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:38.713264 | orchestrator | 2025-02-10 09:45:38 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED 2025-02-10 09:45:38.714987 | orchestrator | 2025-02-10 09:45:38 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:45:38.715044 | orchestrator | 2025-02-10 09:45:38 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:45:38.716249 | orchestrator | 2025-02-10 09:45:38 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state STARTED 2025-02-10 09:45:41.768872 | orchestrator | 2025-02-10 09:45:38 | INFO  | Wait 1 
second(s) until the next check 2025-02-10 09:45:41.769093 | orchestrator | 2025-02-10 09:45:41 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED 2025-02-10 09:45:41.771904 | orchestrator | 2025-02-10 09:45:41 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:45:41.774499 | orchestrator | 2025-02-10 09:45:41 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:45:41.774544 | orchestrator | 2025-02-10 09:45:41 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state STARTED 2025-02-10 09:45:44.819234 | orchestrator | 2025-02-10 09:45:41 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:44.819468 | orchestrator | 2025-02-10 09:45:44 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED 2025-02-10 09:45:44.823116 | orchestrator | 2025-02-10 09:45:44 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:45:44.823212 | orchestrator | 2025-02-10 09:45:44 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:45:44.825434 | orchestrator | 2025-02-10 09:45:44 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state STARTED 2025-02-10 09:45:47.854178 | orchestrator | 2025-02-10 09:45:44 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:47.854332 | orchestrator | 2025-02-10 09:45:47 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED 2025-02-10 09:45:47.854847 | orchestrator | 2025-02-10 09:45:47 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:45:47.855930 | orchestrator | 2025-02-10 09:45:47 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:45:47.856878 | orchestrator | 2025-02-10 09:45:47 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state STARTED 2025-02-10 09:45:50.893028 | orchestrator | 2025-02-10 09:45:47 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:50.893219 | orchestrator | 2025-02-10 09:45:50 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state STARTED 2025-02-10 09:45:50.893678 | orchestrator | 2025-02-10 09:45:50 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:45:50.895979 | orchestrator | 2025-02-10 09:45:50 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:45:50.897184 | orchestrator | 2025-02-10 09:45:50 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state STARTED 2025-02-10 09:45:53.953467 | orchestrator | 2025-02-10 09:45:50 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:45:53.953638 | orchestrator | 2025-02-10 09:45:53 | INFO  | Task c010d313-574f-47d2-8bc2-ead04ef5137a is in state SUCCESS 2025-02-10 09:45:53.954969 | orchestrator | 2025-02-10 09:45:53.955011 | orchestrator | 2025-02-10 09:45:53.955027 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:45:53.955042 | orchestrator | 2025-02-10 09:45:53.955056 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:45:53.955070 | orchestrator | Monday 10 February 2025 09:41:06 +0000 (0:00:01.241) 0:00:01.241 ******* 2025-02-10 09:45:53.955085 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:45:53.955100 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:45:53.955114 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:45:53.955129 | orchestrator | 2025-02-10 
09:45:53.955527 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:45:53.955550 | orchestrator | Monday 10 February 2025 09:41:06 +0000 (0:00:00.731) 0:00:01.973 ******* 2025-02-10 09:45:53.955565 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-02-10 09:45:53.955580 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-02-10 09:45:53.955594 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-02-10 09:45:53.955608 | orchestrator | 2025-02-10 09:45:53.955622 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-02-10 09:45:53.955636 | orchestrator | 2025-02-10 09:45:53.955650 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-02-10 09:45:53.955665 | orchestrator | Monday 10 February 2025 09:41:07 +0000 (0:00:00.663) 0:00:02.636 ******* 2025-02-10 09:45:53.955679 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:45:53.955694 | orchestrator | 2025-02-10 09:45:53.955708 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-02-10 09:45:53.955722 | orchestrator | Monday 10 February 2025 09:41:08 +0000 (0:00:00.623) 0:00:03.259 ******* 2025-02-10 09:45:53.955768 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-02-10 09:45:53.955782 | orchestrator | 2025-02-10 09:45:53.955796 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-02-10 09:45:53.955810 | orchestrator | Monday 10 February 2025 09:41:11 +0000 (0:00:03.671) 0:00:06.931 ******* 2025-02-10 09:45:53.955824 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-02-10 09:45:53.955838 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-02-10 09:45:53.955852 | orchestrator | 2025-02-10 09:45:53.955866 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-02-10 09:45:53.955880 | orchestrator | Monday 10 February 2025 09:41:18 +0000 (0:00:06.486) 0:00:13.417 ******* 2025-02-10 09:45:53.955910 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-10 09:45:53.955926 | orchestrator | 2025-02-10 09:45:53.955940 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-02-10 09:45:53.955954 | orchestrator | Monday 10 February 2025 09:41:21 +0000 (0:00:03.025) 0:00:16.442 ******* 2025-02-10 09:45:53.955969 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:45:53.955983 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-02-10 09:45:53.955997 | orchestrator | 2025-02-10 09:45:53.956011 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-02-10 09:45:53.956025 | orchestrator | Monday 10 February 2025 09:41:25 +0000 (0:00:03.760) 0:00:20.203 ******* 2025-02-10 09:45:53.956039 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:45:53.956053 | orchestrator | 2025-02-10 09:45:53.956067 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-02-10 09:45:53.956081 | orchestrator | Monday 10 February 2025 09:41:28 +0000 (0:00:03.332) 0:00:23.535 
******* 2025-02-10 09:45:53.956094 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-02-10 09:45:53.956108 | orchestrator | 2025-02-10 09:45:53.956122 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-02-10 09:45:53.956136 | orchestrator | Monday 10 February 2025 09:41:32 +0000 (0:00:04.168) 0:00:27.704 ******* 2025-02-10 09:45:53.956239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:45:53.956289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:45:53.956325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:45:53.956358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 
'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:45:53.956407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:45:53.956450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:45:53.956472 | orchestrator | 2025-02-10 09:45:53.956504 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-02-10 09:45:53.956519 | orchestrator | Monday 10 February 2025 09:41:41 +0000 (0:00:08.542) 0:00:36.246 ******* 2025-02-10 09:45:53.956533 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:45:53.956547 | orchestrator | 2025-02-10 09:45:53.956561 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-02-10 09:45:53.956575 | orchestrator | Monday 10 February 2025 09:41:43 +0000 (0:00:02.189) 0:00:38.436 ******* 2025-02-10 09:45:53.956589 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:45:53.956604 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:45:53.956618 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:45:53.956632 | orchestrator | 2025-02-10 09:45:53.956646 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-02-10 09:45:53.956661 | orchestrator | Monday 10 February 2025 09:41:57 +0000 (0:00:13.760) 0:00:52.196 ******* 2025-02-10 09:45:53.956675 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-02-10 09:45:53.956690 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-02-10 09:45:53.956704 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-02-10 09:45:53.956719 | orchestrator | 2025-02-10 09:45:53.956733 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-02-10 09:45:53.956747 | orchestrator | Monday 10 February 2025 09:41:59 +0000 (0:00:02.707) 0:00:54.903 ******* 2025-02-10 09:45:53.956761 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-02-10 09:45:53.956787 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-02-10 09:45:53.956802 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-02-10 09:45:53.956816 | orchestrator | 2025-02-10 09:45:53.956997 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-02-10 09:45:53.957023 | orchestrator | Monday 10 February 2025 09:42:02 +0000 (0:00:02.911) 0:00:57.815 ******* 2025-02-10 09:45:53.957037 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:45:53.957052 | 
orchestrator | ok: [testbed-node-1] 2025-02-10 09:45:53.957066 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:45:53.957080 | orchestrator | 2025-02-10 09:45:53.957095 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-02-10 09:45:53.957109 | orchestrator | Monday 10 February 2025 09:42:03 +0000 (0:00:01.028) 0:00:58.843 ******* 2025-02-10 09:45:53.957123 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:45:53.957137 | orchestrator | 2025-02-10 09:45:53.957172 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-02-10 09:45:53.957197 | orchestrator | Monday 10 February 2025 09:42:04 +0000 (0:00:00.305) 0:00:59.149 ******* 2025-02-10 09:45:53.957211 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:45:53.957225 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:45:53.957239 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:45:53.957253 | orchestrator | 2025-02-10 09:45:53.957267 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-02-10 09:45:53.957288 | orchestrator | Monday 10 February 2025 09:42:04 +0000 (0:00:00.374) 0:00:59.523 ******* 2025-02-10 09:45:53.957302 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:45:53.957316 | orchestrator | 2025-02-10 09:45:53.957330 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-02-10 09:45:53.957345 | orchestrator | Monday 10 February 2025 09:42:06 +0000 (0:00:02.221) 0:01:01.745 ******* 2025-02-10 09:45:53.957386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:45:53.957403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:45:53.957448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:45:53.957464 | orchestrator | 2025-02-10 09:45:53.957478 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-02-10 
09:45:53.957492 | orchestrator | Monday 10 February 2025 09:42:15 +0000 (0:00:08.376) 0:01:10.121 ******* 2025-02-10 09:45:53.957506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-10 09:45:53.957528 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:45:53.957550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-10 09:45:53.957584 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:45:53.957600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-10 09:45:53.957615 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:45:53.957632 | orchestrator | 2025-02-10 09:45:53.957647 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-02-10 09:45:53.957663 | orchestrator | Monday 10 February 2025 09:42:20 +0000 (0:00:05.110) 0:01:15.232 ******* 2025-02-10 09:45:53.957686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-10 09:45:53.957720 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:45:53.957743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-10 09:45:53.957760 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:45:53.957777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-10 09:45:53.957814 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:45:53.957830 | orchestrator | 2025-02-10 09:45:53.957845 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-02-10 09:45:53.957861 | orchestrator | Monday 10 February 2025 09:42:27 +0000 (0:00:07.296) 0:01:22.528 ******* 2025-02-10 09:45:53.957877 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:45:53.957892 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:45:53.957908 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:45:53.957924 | orchestrator | 2025-02-10 09:45:53.957941 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-02-10 09:45:53.957957 | orchestrator | Monday 10 February 2025 09:42:41 +0000 (0:00:14.143) 0:01:36.672 ******* 2025-02-10 09:45:53.957981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:45:53.957999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:45:53.958132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:45:53.958169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:45:53.958213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:45:53.958229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 
'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:45:53.958262 | orchestrator | 2025-02-10 09:45:53.958281 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-02-10 09:45:53.958303 | orchestrator | Monday 10 February 2025 09:42:54 +0000 (0:00:12.954) 0:01:49.626 ******* 2025-02-10 09:45:53.958322 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:45:53.958337 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:45:53.958351 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:45:53.958484 | orchestrator | 2025-02-10 09:45:53.958503 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-02-10 09:45:53.958518 | orchestrator | Monday 10 February 2025 09:43:19 +0000 (0:00:24.675) 0:02:14.301 ******* 2025-02-10 09:45:53.958533 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:45:53.958547 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:45:53.958562 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:45:53.958577 | orchestrator | 2025-02-10 09:45:53.958591 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-02-10 09:45:53.958606 | orchestrator | Monday 10 February 2025 09:43:36 +0000 (0:00:17.467) 0:02:31.769 ******* 2025-02-10 09:45:53.958621 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:45:53.958636 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:45:53.958651 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:45:53.958665 | orchestrator | 2025-02-10 09:45:53.958680 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-02-10 09:45:53.958695 | orchestrator | Monday 10 February 2025 
09:43:44 +0000 (0:00:08.207) 0:02:39.977 ******* 2025-02-10 09:45:53.958709 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:45:53.958724 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:45:53.958739 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:45:53.958753 | orchestrator | 2025-02-10 09:45:53.958768 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-02-10 09:45:53.958782 | orchestrator | Monday 10 February 2025 09:44:07 +0000 (0:00:23.048) 0:03:03.026 ******* 2025-02-10 09:45:53.958797 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:45:53.958811 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:45:53.958826 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:45:53.958918 | orchestrator | 2025-02-10 09:45:53.958935 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-02-10 09:45:53.958958 | orchestrator | Monday 10 February 2025 09:44:16 +0000 (0:00:08.943) 0:03:11.969 ******* 2025-02-10 09:45:53.958973 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:45:53.958988 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:45:53.959001 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:45:53.959015 | orchestrator | 2025-02-10 09:45:53.959029 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-02-10 09:45:53.959043 | orchestrator | Monday 10 February 2025 09:44:17 +0000 (0:00:00.392) 0:03:12.361 ******* 2025-02-10 09:45:53.959058 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-02-10 09:45:53.959072 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:45:53.959087 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-02-10 09:45:53.959101 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:45:53.959118 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-02-10 09:45:53.959133 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:45:53.959181 | orchestrator | 2025-02-10 09:45:53.959197 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-02-10 09:45:53.959221 | orchestrator | Monday 10 February 2025 09:44:22 +0000 (0:00:05.480) 0:03:17.842 ******* 2025-02-10 09:45:53.959237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:45:53.959262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:45:53.959295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:45:53.959330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:45:53.959366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-10 09:45:53.959390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-10 09:45:53.959405 | orchestrator | 2025-02-10 09:45:53.959419 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-02-10 09:45:53.959433 | orchestrator | Monday 10 February 2025 09:44:32 +0000 (0:00:09.440) 0:03:27.283 ******* 2025-02-10 09:45:53.959447 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:45:53.959461 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:45:53.959475 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:45:53.959489 | orchestrator | 2025-02-10 09:45:53.959503 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-02-10 09:45:53.959517 | orchestrator | 
Monday 10 February 2025 09:44:32 +0000 (0:00:00.383) 0:03:27.667 *******
2025-02-10 09:45:53.959531 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:45:53.959546 | orchestrator |
2025-02-10 09:45:53.959566 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-02-10 09:45:53.959585 | orchestrator | Monday 10 February 2025 09:44:34 +0000 (0:00:02.243) 0:03:29.910 *******
2025-02-10 09:45:53.959620 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:45:53.959634 | orchestrator |
2025-02-10 09:45:53.959649 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-02-10 09:45:53.959663 | orchestrator | Monday 10 February 2025 09:44:37 +0000 (0:00:02.742) 0:03:32.652 *******
2025-02-10 09:45:53.959677 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:45:53.959691 | orchestrator |
2025-02-10 09:45:53.959705 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-02-10 09:45:53.959719 | orchestrator | Monday 10 February 2025 09:44:40 +0000 (0:00:03.312) 0:03:35.965 *******
2025-02-10 09:45:53.959733 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:45:53.959747 | orchestrator |
2025-02-10 09:45:53.959761 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-02-10 09:45:53.959775 | orchestrator | Monday 10 February 2025 09:45:07 +0000 (0:00:26.132) 0:04:02.097 *******
2025-02-10 09:45:53.959789 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:45:53.959803 | orchestrator |
2025-02-10 09:45:53.959817 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-02-10 09:45:53.959831 | orchestrator | Monday 10 February 2025 09:45:09 +0000 (0:00:02.542) 0:04:04.640 *******
2025-02-10 09:45:53.959845 | orchestrator |
2025-02-10 09:45:53.959859 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-02-10 09:45:53.959873 | orchestrator | Monday 10 February 2025 09:45:09 +0000 (0:00:00.078) 0:04:04.719 *******
2025-02-10 09:45:53.959887 | orchestrator |
2025-02-10 09:45:53.959901 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-02-10 09:45:53.959915 | orchestrator | Monday 10 February 2025 09:45:09 +0000 (0:00:00.221) 0:04:04.940 *******
2025-02-10 09:45:53.959929 | orchestrator |
2025-02-10 09:45:53.959943 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-02-10 09:45:53.959957 | orchestrator | Monday 10 February 2025 09:45:09 +0000 (0:00:00.080) 0:04:05.021 *******
2025-02-10 09:45:53.959971 | orchestrator | changed: [testbed-node-0]
2025-02-10 09:45:53.959985 | orchestrator | changed: [testbed-node-1]
2025-02-10 09:45:53.959998 | orchestrator | changed: [testbed-node-2]
2025-02-10 09:45:53.960012 | orchestrator |
2025-02-10 09:45:53.960026 | orchestrator | PLAY RECAP *********************************************************************
2025-02-10 09:45:53.960042 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-02-10 09:45:53.960058 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-02-10 09:45:53.960072 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-02-10 09:45:53.960087 | orchestrator |
2025-02-10 09:45:53.960101 | orchestrator |
2025-02-10 09:45:53.960115 | orchestrator | TASKS RECAP ********************************************************************
2025-02-10 09:45:53.960129 | orchestrator | Monday 10 February 2025 09:45:51 +0000 (0:00:41.808) 0:04:46.829 *******
2025-02-10 09:45:53.960214 | orchestrator | ===============================================================================
2025-02-10 09:45:53.960231 | orchestrator | glance : Restart glance-api container ---------------------------------- 41.81s
2025-02-10 09:45:53.960245 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.13s
2025-02-10 09:45:53.960259 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 24.68s
2025-02-10 09:45:53.960273 | orchestrator | glance : Copying over glance-image-import.conf ------------------------- 23.05s
2025-02-10 09:45:53.960287 | orchestrator | glance : Copying over glance-cache.conf for glance_api ----------------- 17.47s
2025-02-10 09:45:53.960300 | orchestrator | glance : Creating TLS backend PEM File --------------------------------- 14.14s
2025-02-10 09:45:53.960314 | orchestrator | glance : Ensuring glance service ceph config subdir exists ------------- 13.76s
2025-02-10 09:45:53.960328 | orchestrator | glance : Copying over config.json files for services ------------------- 12.95s
2025-02-10 09:45:53.960350 | orchestrator | glance : Check glance containers ---------------------------------------- 9.44s
2025-02-10 09:45:53.960364 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 8.94s
2025-02-10 09:45:53.960378 | orchestrator | glance : Ensuring config directories exist ------------------------------ 8.54s
2025-02-10 09:45:53.960392 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 8.38s
2025-02-10 09:45:53.960406 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 8.21s
2025-02-10 09:45:53.960425 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 7.30s
2025-02-10 09:45:53.960440 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.49s
2025-02-10 09:45:53.960453 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.48s
2025-02-10 09:45:53.960468 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 5.11s
2025-02-10 09:45:53.960481 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.17s
2025-02-10 09:45:53.960495 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.76s
2025-02-10 09:45:53.960509 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.67s
2025-02-10 09:45:53.960523 | orchestrator | 2025-02-10 09:45:53 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED
2025-02-10 09:45:53.960543 | orchestrator | 2025-02-10 09:45:53 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED
2025-02-10 09:45:57.005036 | orchestrator | 2025-02-10 09:45:53 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:45:57.005231 | orchestrator | 2025-02-10 09:45:53 | INFO  | Task 02321dbd-52ce-493a-952d-68d92ac702a1 is in state SUCCESS
2025-02-10 09:45:57.005253 | orchestrator | 2025-02-10 09:45:53 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:45:57.005288 | orchestrator | 2025-02-10 09:45:57 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED
2025-02-10 09:45:57.005509 | orchestrator | 2025-02-10 09:45:57 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED
2025-02-10 09:45:57.006550 | orchestrator | 2025-02-10 09:45:57 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:45:57.006661 | orchestrator | 2025-02-10 09:45:57 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:46:00.058812 | orchestrator | 2025-02-10 09:46:00 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED
2025-02-10 09:46:00.059602 | orchestrator | 2025-02-10 09:46:00 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED
2025-02-10 09:46:00.060549 | orchestrator | 2025-02-10 09:46:00 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:46:00.061360 | orchestrator | 2025-02-10 09:46:00 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:46:03.098809 | orchestrator | 2025-02-10 09:46:03 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED
2025-02-10 09:46:06.129595 | orchestrator | 2025-02-10 09:46:03 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED
2025-02-10 09:46:06.129742 | orchestrator | 2025-02-10 09:46:03 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:46:06.129763 | orchestrator | 2025-02-10 09:46:03 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:46:06.129796 | orchestrator | 2025-02-10 09:46:06 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED
2025-02-10 09:46:06.129961 | orchestrator | 2025-02-10 09:46:06 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED
2025-02-10 09:46:06.131432 | orchestrator | 2025-02-10 09:46:06 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:46:09.171917 | orchestrator | 2025-02-10 09:46:06 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:46:09.172075 | orchestrator | 2025-02-10 09:46:09 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED
2025-02-10 09:46:09.172252 | orchestrator | 2025-02-10 09:46:09 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED
2025-02-10 09:46:09.172410 | orchestrator | 2025-02-10 09:46:09 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:46:12.210952 | orchestrator | 2025-02-10 09:46:09 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:46:12.211134 | orchestrator | 2025-02-10 09:46:12 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED
2025-02-10 09:46:12.211647 | orchestrator | 2025-02-10 09:46:12 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED
2025-02-10 09:46:15.275495 | orchestrator | 2025-02-10 09:46:12 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:46:15.275660 | orchestrator | 2025-02-10 09:46:12 | INFO  | Wait 1 second(s) until the next check
2025-02-10 09:46:15.275714 | orchestrator | 2025-02-10 09:46:15 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED
2025-02-10 09:46:15.277237 | orchestrator | 2025-02-10 09:46:15 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED
2025-02-10 09:46:15.280431 | orchestrator | 2025-02-10 09:46:15 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED
2025-02-10 09:46:18.322585 |
orchestrator | 2025-02-10 09:46:15 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:18.322849 | orchestrator | 2025-02-10 09:46:18 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:46:18.324280 | orchestrator | 2025-02-10 09:46:18 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:46:18.324341 | orchestrator | 2025-02-10 09:46:18 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:46:21.372780 | orchestrator | 2025-02-10 09:46:18 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:21.372961 | orchestrator | 2025-02-10 09:46:21 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:46:21.373938 | orchestrator | 2025-02-10 09:46:21 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:46:21.376002 | orchestrator | 2025-02-10 09:46:21 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:46:24.415975 | orchestrator | 2025-02-10 09:46:21 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:24.416138 | orchestrator | 2025-02-10 09:46:24 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:46:24.417606 | orchestrator | 2025-02-10 09:46:24 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:46:24.421661 | orchestrator | 2025-02-10 09:46:24 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:46:27.471296 | orchestrator | 2025-02-10 09:46:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:27.471461 | orchestrator | 2025-02-10 09:46:27 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:46:27.471905 | orchestrator | 2025-02-10 09:46:27 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:46:27.473056 | orchestrator | 2025-02-10 09:46:27 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:46:30.524335 | orchestrator | 2025-02-10 09:46:27 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:30.524506 | orchestrator | 2025-02-10 09:46:30 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:46:30.525262 | orchestrator | 2025-02-10 09:46:30 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:46:30.525308 | orchestrator | 2025-02-10 09:46:30 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:46:33.567454 | orchestrator | 2025-02-10 09:46:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:33.567610 | orchestrator | 2025-02-10 09:46:33 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:46:33.568946 | orchestrator | 2025-02-10 09:46:33 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:46:33.570993 | orchestrator | 2025-02-10 09:46:33 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:46:33.571118 | orchestrator | 2025-02-10 09:46:33 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:36.608356 | orchestrator | 2025-02-10 09:46:36 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:46:39.683283 | orchestrator | 2025-02-10 09:46:36 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:46:39.683513 | orchestrator | 2025-02-10 09:46:36 | INFO  | Task 
0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:46:39.683538 | orchestrator | 2025-02-10 09:46:36 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:39.683574 | orchestrator | 2025-02-10 09:46:39 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:46:42.746285 | orchestrator | 2025-02-10 09:46:39 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:46:42.746426 | orchestrator | 2025-02-10 09:46:39 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:46:42.746447 | orchestrator | 2025-02-10 09:46:39 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:42.746482 | orchestrator | 2025-02-10 09:46:42 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:46:42.746896 | orchestrator | 2025-02-10 09:46:42 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:46:45.799884 | orchestrator | 2025-02-10 09:46:42 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:46:45.800007 | orchestrator | 2025-02-10 09:46:42 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:45.800039 | orchestrator | 2025-02-10 09:46:45 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:46:45.807747 | orchestrator | 2025-02-10 09:46:45 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:46:45.808613 | orchestrator | 2025-02-10 09:46:45 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:46:48.852073 | orchestrator | 2025-02-10 09:46:45 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:48.852372 | orchestrator | 2025-02-10 09:46:48 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:46:48.856032 | orchestrator | 2025-02-10 09:46:48 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:46:48.856099 | orchestrator | 2025-02-10 09:46:48 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:46:51.891127 | orchestrator | 2025-02-10 09:46:48 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:51.891398 | orchestrator | 2025-02-10 09:46:51 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:46:51.891942 | orchestrator | 2025-02-10 09:46:51 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:46:51.891972 | orchestrator | 2025-02-10 09:46:51 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:46:54.937540 | orchestrator | 2025-02-10 09:46:51 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:54.937697 | orchestrator | 2025-02-10 09:46:54 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:46:54.939733 | orchestrator | 2025-02-10 09:46:54 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:46:54.942475 | orchestrator | 2025-02-10 09:46:54 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:46:57.979591 | orchestrator | 2025-02-10 09:46:54 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:46:57.979752 | orchestrator | 2025-02-10 09:46:57 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:46:57.980512 | orchestrator | 2025-02-10 09:46:57 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state 
STARTED 2025-02-10 09:47:01.010976 | orchestrator | 2025-02-10 09:46:57 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:01.011097 | orchestrator | 2025-02-10 09:46:57 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:01.011133 | orchestrator | 2025-02-10 09:47:01 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:01.011534 | orchestrator | 2025-02-10 09:47:01 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:01.012407 | orchestrator | 2025-02-10 09:47:01 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:04.061001 | orchestrator | 2025-02-10 09:47:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:04.061248 | orchestrator | 2025-02-10 09:47:04 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:04.062272 | orchestrator | 2025-02-10 09:47:04 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:04.064591 | orchestrator | 2025-02-10 09:47:04 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:07.110998 | orchestrator | 2025-02-10 09:47:04 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:07.111219 | orchestrator | 2025-02-10 09:47:07 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:07.111611 | orchestrator | 2025-02-10 09:47:07 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:07.112485 | orchestrator | 2025-02-10 09:47:07 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:10.152697 | orchestrator | 2025-02-10 09:47:07 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:10.152921 | orchestrator | 2025-02-10 09:47:10 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:10.154596 | orchestrator | 2025-02-10 09:47:10 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:10.155509 | orchestrator | 2025-02-10 09:47:10 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:13.205006 | orchestrator | 2025-02-10 09:47:10 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:13.205255 | orchestrator | 2025-02-10 09:47:13 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:13.208079 | orchestrator | 2025-02-10 09:47:13 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:13.208126 | orchestrator | 2025-02-10 09:47:13 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:16.258362 | orchestrator | 2025-02-10 09:47:13 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:16.258515 | orchestrator | 2025-02-10 09:47:16 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:16.258950 | orchestrator | 2025-02-10 09:47:16 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:16.260198 | orchestrator | 2025-02-10 09:47:16 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:19.302989 | orchestrator | 2025-02-10 09:47:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:19.303196 | orchestrator | 2025-02-10 09:47:19 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:19.304688 | orchestrator 
| 2025-02-10 09:47:19 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:19.304728 | orchestrator | 2025-02-10 09:47:19 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:22.348307 | orchestrator | 2025-02-10 09:47:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:22.348468 | orchestrator | 2025-02-10 09:47:22 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:22.349676 | orchestrator | 2025-02-10 09:47:22 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:22.350631 | orchestrator | 2025-02-10 09:47:22 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:25.410811 | orchestrator | 2025-02-10 09:47:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:25.411005 | orchestrator | 2025-02-10 09:47:25 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:25.411705 | orchestrator | 2025-02-10 09:47:25 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:25.417549 | orchestrator | 2025-02-10 09:47:25 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:28.457852 | orchestrator | 2025-02-10 09:47:25 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:28.457985 | orchestrator | 2025-02-10 09:47:28 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:28.459235 | orchestrator | 2025-02-10 09:47:28 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:28.459300 | orchestrator | 2025-02-10 09:47:28 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:31.527205 | orchestrator | 2025-02-10 09:47:28 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:31.527537 | orchestrator | 2025-02-10 09:47:31 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:31.529499 | orchestrator | 2025-02-10 09:47:31 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:31.529547 | orchestrator | 2025-02-10 09:47:31 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:34.568054 | orchestrator | 2025-02-10 09:47:31 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:34.568268 | orchestrator | 2025-02-10 09:47:34 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:34.569703 | orchestrator | 2025-02-10 09:47:34 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:37.620036 | orchestrator | 2025-02-10 09:47:34 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:37.620234 | orchestrator | 2025-02-10 09:47:34 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:37.620277 | orchestrator | 2025-02-10 09:47:37 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:37.622127 | orchestrator | 2025-02-10 09:47:37 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:37.622224 | orchestrator | 2025-02-10 09:47:37 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:40.659825 | orchestrator | 2025-02-10 09:47:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:40.659980 | orchestrator | 2025-02-10 09:47:40 | INFO  | Task 
9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:40.661722 | orchestrator | 2025-02-10 09:47:40 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:40.661765 | orchestrator | 2025-02-10 09:47:40 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:43.692417 | orchestrator | 2025-02-10 09:47:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:43.693695 | orchestrator | 2025-02-10 09:47:43 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:46.734462 | orchestrator | 2025-02-10 09:47:43 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:46.734603 | orchestrator | 2025-02-10 09:47:43 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:46.734626 | orchestrator | 2025-02-10 09:47:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:46.734663 | orchestrator | 2025-02-10 09:47:46 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:46.735051 | orchestrator | 2025-02-10 09:47:46 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:46.737318 | orchestrator | 2025-02-10 09:47:46 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:49.791914 | orchestrator | 2025-02-10 09:47:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:49.792087 | orchestrator | 2025-02-10 09:47:49 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:49.794553 | orchestrator | 2025-02-10 09:47:49 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:49.797966 | orchestrator | 2025-02-10 09:47:49 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:52.850960 | orchestrator | 2025-02-10 09:47:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:52.851128 | orchestrator | 2025-02-10 09:47:52 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:52.851768 | orchestrator | 2025-02-10 09:47:52 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:52.851809 | orchestrator | 2025-02-10 09:47:52 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:55.895404 | orchestrator | 2025-02-10 09:47:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:55.896436 | orchestrator | 2025-02-10 09:47:55 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:55.897442 | orchestrator | 2025-02-10 09:47:55 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state STARTED 2025-02-10 09:47:55.898688 | orchestrator | 2025-02-10 09:47:55 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:47:58.941783 | orchestrator | 2025-02-10 09:47:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:47:58.941966 | orchestrator | 2025-02-10 09:47:58 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:47:58.942434 | orchestrator | 2025-02-10 09:47:58 | INFO  | Task 9281515c-eb30-48b0-9a94-b6e2e31229cd is in state SUCCESS 2025-02-10 09:47:58.944262 | orchestrator | 2025-02-10 09:47:58.944301 | orchestrator | 2025-02-10 09:47:58.944316 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:47:58.944331 | 
orchestrator | 2025-02-10 09:47:58.944345 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:47:58.944360 | orchestrator | Monday 10 February 2025 09:45:00 +0000 (0:00:00.265) 0:00:00.265 ******* 2025-02-10 09:47:58.944374 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:47:58.944389 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:47:58.944403 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:47:58.944417 | orchestrator | 2025-02-10 09:47:58.944431 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:47:58.944445 | orchestrator | Monday 10 February 2025 09:45:00 +0000 (0:00:00.445) 0:00:00.711 ******* 2025-02-10 09:47:58.944459 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-02-10 09:47:58.944474 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-02-10 09:47:58.944488 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-02-10 09:47:58.944502 | orchestrator | 2025-02-10 09:47:58.944515 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-02-10 09:47:58.944529 | orchestrator | 2025-02-10 09:47:58.944543 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-02-10 09:47:58.944557 | orchestrator | Monday 10 February 2025 09:45:01 +0000 (0:00:00.554) 0:00:01.266 ******* 2025-02-10 09:47:58.944571 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:47:58.944586 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:47:58.944600 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:47:58.944614 | orchestrator | 2025-02-10 09:47:58.944628 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:47:58.944644 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:47:58.944660 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:47:58.944674 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:47:58.944688 | orchestrator | 2025-02-10 09:47:58.944702 | orchestrator | 2025-02-10 09:47:58.944717 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:47:58.944731 | orchestrator | Monday 10 February 2025 09:45:52 +0000 (0:00:50.820) 0:00:52.087 ******* 2025-02-10 09:47:58.944745 | orchestrator | =============================================================================== 2025-02-10 09:47:58.944759 | orchestrator | Waiting for Nova public port to be UP ---------------------------------- 50.82s 2025-02-10 09:47:58.944772 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s 2025-02-10 09:47:58.944786 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.45s 2025-02-10 09:47:58.944800 | orchestrator | 2025-02-10 09:47:58.944814 | orchestrator | 2025-02-10 09:47:58.945369 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:47:58.945386 | orchestrator | 2025-02-10 09:47:58.945400 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:47:58.945444 | orchestrator | Monday 10 February 2025 09:45:56 +0000 (0:00:00.398) 0:00:00.398 ******* 
2025-02-10 09:47:58.945458 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:47:58.945472 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:47:58.945487 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:47:58.945501 | orchestrator | 2025-02-10 09:47:58.945515 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:47:58.945529 | orchestrator | Monday 10 February 2025 09:45:56 +0000 (0:00:00.621) 0:00:01.020 ******* 2025-02-10 09:47:58.945543 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-02-10 09:47:58.945626 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-02-10 09:47:58.945646 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-02-10 09:47:58.945661 | orchestrator | 2025-02-10 09:47:58.945674 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-02-10 09:47:58.945688 | orchestrator | 2025-02-10 09:47:58.945702 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-02-10 09:47:58.945716 | orchestrator | Monday 10 February 2025 09:45:57 +0000 (0:00:00.659) 0:00:01.680 ******* 2025-02-10 09:47:58.945731 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:47:58.945745 | orchestrator | 2025-02-10 09:47:58.945759 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-02-10 09:47:58.946110 | orchestrator | Monday 10 February 2025 09:45:58 +0000 (0:00:00.834) 0:00:02.515 ******* 2025-02-10 09:47:58.946141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:47:58.946256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:47:58.946280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:47:58.946295 | orchestrator | 2025-02-10 09:47:58.946310 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-02-10 09:47:58.946325 | orchestrator | Monday 10 February 2025 09:45:59 +0000 (0:00:01.317) 0:00:03.832 ******* 2025-02-10 09:47:58.946339 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-02-10 09:47:58.946355 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-02-10 09:47:58.946384 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:47:58.946398 | orchestrator | 2025-02-10 09:47:58.946412 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-02-10 09:47:58.946426 | orchestrator | Monday 10 February 2025 09:46:00 +0000 (0:00:00.721) 0:00:04.554 ******* 2025-02-10 09:47:58.946441 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:47:58.946455 | orchestrator | 2025-02-10 09:47:58.946469 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-02-10 09:47:58.946483 | orchestrator | Monday 10 February 2025 09:46:01 +0000 (0:00:00.830) 0:00:05.384 ******* 2025-02-10 09:47:58.946498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:47:58.946514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:47:58.946529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:47:58.946544 | orchestrator | 2025-02-10 09:47:58.946558 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-02-10 09:47:58.946605 | orchestrator | Monday 10 February 2025 09:46:02 +0000 (0:00:01.525) 0:00:06.909 ******* 2025-02-10 09:47:58.946623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-10 09:47:58.946639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-10 09:47:58.946661 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:47:58.946675 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:47:58.946689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-10 09:47:58.946704 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:47:58.946718 | orchestrator | 2025-02-10 09:47:58.946732 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-02-10 09:47:58.946746 | orchestrator | Monday 10 February 2025 09:46:03 +0000 (0:00:00.467) 0:00:07.377 ******* 2025-02-10 09:47:58.946801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-10 09:47:58.946818 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:47:58.946839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-10 09:47:58.946870 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:47:58.946921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-10 09:47:58.946938 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:47:58.946953 | orchestrator | 2025-02-10 09:47:58.946968 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-02-10 09:47:58.946991 | orchestrator | Monday 10 February 2025 09:46:04 +0000 (0:00:00.997) 0:00:08.375 ******* 2025-02-10 09:47:58.947007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:47:58.947021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 
2025-02-10 09:47:58.947050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:47:58.947066 | orchestrator | 2025-02-10 09:47:58.947080 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-02-10 09:47:58.947095 | orchestrator | Monday 10 February 2025 09:46:05 +0000 (0:00:01.363) 0:00:09.738 ******* 2025-02-10 09:47:58.947109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:47:58.947154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:47:58.947222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:47:58.947262 | orchestrator | 2025-02-10 09:47:58.947277 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-02-10 09:47:58.947291 | orchestrator | Monday 10 February 2025 09:46:06 +0000 (0:00:01.242) 0:00:10.980 ******* 2025-02-10 09:47:58.947305 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:47:58.947320 | orchestrator | skipping: [testbed-node-1] 
2025-02-10 09:47:58.947334 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:47:58.947348 | orchestrator | 2025-02-10 09:47:58.947362 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-02-10 09:47:58.947376 | orchestrator | Monday 10 February 2025 09:46:07 +0000 (0:00:00.385) 0:00:11.365 ******* 2025-02-10 09:47:58.947390 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-02-10 09:47:58.947404 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-02-10 09:47:58.947418 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-02-10 09:47:58.947432 | orchestrator | 2025-02-10 09:47:58.947446 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-02-10 09:47:58.947460 | orchestrator | Monday 10 February 2025 09:46:08 +0000 (0:00:01.627) 0:00:12.993 ******* 2025-02-10 09:47:58.947474 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-02-10 09:47:58.947489 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-02-10 09:47:58.947509 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-02-10 09:47:58.947523 | orchestrator | 2025-02-10 09:47:58.947537 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-02-10 09:47:58.947551 | orchestrator | Monday 10 February 2025 09:46:10 +0000 (0:00:01.592) 0:00:14.586 ******* 2025-02-10 09:47:58.947565 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:47:58.947579 | orchestrator | 2025-02-10 09:47:58.947593 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-02-10 09:47:58.947607 | orchestrator | Monday 10 February 2025 09:46:10 +0000 (0:00:00.462) 0:00:15.049 ******* 2025-02-10 09:47:58.947621 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-02-10 09:47:58.947635 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-02-10 09:47:58.947648 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:47:58.947663 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:47:58.947676 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:47:58.947690 | orchestrator | 2025-02-10 09:47:58.947704 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-02-10 09:47:58.947717 | orchestrator | Monday 10 February 2025 09:46:11 +0000 (0:00:00.792) 0:00:15.842 ******* 2025-02-10 09:47:58.947731 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:47:58.947745 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:47:58.947759 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:47:58.947773 | orchestrator | 2025-02-10 09:47:58.947787 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-02-10 09:47:58.947801 | orchestrator | Monday 10 February 2025 09:46:12 +0000 (0:00:00.545) 0:00:16.387 ******* 2025-02-10 09:47:58.947815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1062474, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2065947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.947888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1062474, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2065947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.947908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1062474, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2065947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.947924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1062466, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2005947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.947939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1062466, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2005947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.947953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1062466, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2005947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.947968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1062459, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1975946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1062459, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1975946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1062459, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1975946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1062470, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2025948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1062470, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2025948, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1062470, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2025948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1062453, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1925945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1062453, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1925945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1062453, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1925945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1062462, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1985946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-02-10 09:47:58.948253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1062462, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1985946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1062462, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1985946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1062469, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2025948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1062469, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2025948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1062469, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2025948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1062451, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1905944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1062451, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1905944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1062451, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1905944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1062440, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1825943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1062440, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1825943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1062440, 
'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1825943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1062454, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1935945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1062454, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1935945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1062454, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1935945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1062444, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1855943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1062444, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1855943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1062444, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1855943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1062468, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2015946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1062468, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2015946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1062468, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2015946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1062457, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1955945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948704 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1062457, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1955945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1062457, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1955945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1062471, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2025948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1062471, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2025948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1062471, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2025948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1062449, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1895945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1062449, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1895945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1062449, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1895945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1062463, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1995945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1062463, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1995945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1062463, 'dev': 216, 
'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1995945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1062442, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1835945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1062442, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1835945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1062442, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1835945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1062447, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1885943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.948998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1062447, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1885943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1062447, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1885943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1062458, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1965945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1062458, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1965945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1062458, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.1965945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1062501, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2425952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1062501, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2425952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1062501, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2425952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1062493, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.216595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1062493, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.216595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1062493, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.216595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949230 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1062478, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2085948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1062478, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2085948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1062478, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2085948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1062549, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2505953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1062549, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2505953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949324 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1062549, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2505953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1062480, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2085948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1062480, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2085948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1062480, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2085948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1062545, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2495954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949429 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1062545, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2495954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1062545, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2495954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1062555, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2525954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1062555, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2525954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1062555, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2525954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1062534, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2455952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1062534, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2455952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1062534, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2455952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1062543, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2485952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1062543, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2485952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1062543, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2485952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1062481, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2095947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1062481, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2095947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1062481, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2095947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1062497, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2175949, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1062497, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2175949, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1062497, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2175949, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1062560, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2535954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1062560, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2535954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1062560, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2535954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1062548, 'dev': 216, 
'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2495954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1062548, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2495954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1062548, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2495954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1062484, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2125947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1062484, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2125947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1062484, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2125947, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1062482, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2105947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1062482, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2105947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1062482, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2105947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.949996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1062488, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2125947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.950076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1062488, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2125947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.950095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1062488, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2125947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.950110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1062489, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.216595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.950137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1062489, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.216595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.950153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1062489, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.216595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.950233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1062498, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2185948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.950260 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1062498, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2185948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.950275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1062498, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2185948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.950290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1062537, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2475953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.950318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1062537, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2475953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.950333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1062537, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2475953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.950354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1062500, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2185948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.950374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1062500, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2185948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.950387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1062500, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2185948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.950400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1062566, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2555954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.950423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1062566, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2555954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.950437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1062566, 'dev': 216, 'nlink': 1, 'atime': 1739173273.0, 'mtime': 1739173273.0, 'ctime': 1739177102.2555954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-10 09:47:58.950450 | orchestrator | 2025-02-10 09:47:58.950462 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-02-10 09:47:58.950475 | orchestrator | Monday 10 February 2025 09:46:56 +0000 (0:00:44.070) 0:01:00.457 ******* 2025-02-10 09:47:58.950493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:47:58.950513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:47:58.950526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-10 09:47:58.950539 | orchestrator | 2025-02-10 09:47:58.950551 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-02-10 09:47:58.950564 | orchestrator | Monday 10 February 2025 09:46:57 +0000 (0:00:01.302) 0:01:01.760 ******* 2025-02-10 09:47:58.950576 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:47:58.950589 | orchestrator | 2025-02-10 09:47:58.950602 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-02-10 09:47:58.950614 | orchestrator | Monday 10 February 2025 09:46:59 +0000 (0:00:02.251) 0:01:04.011 ******* 2025-02-10 09:47:58.950627 | orchestrator | changed: [testbed-node-0] 
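[Editor's note] The two "Creating grafana database ..." tasks above are, in kolla-ansible, thin wrappers that issue MySQL DDL against the MariaDB/Galera cluster from the first node. A minimal stand-alone sketch of the same effect, assuming a reachable MariaDB endpoint and the PyMySQL client; the host name and passwords below are placeholders, not values from this job:

# Rough equivalent of the two Grafana database bootstrap tasks (illustrative only).
import pymysql

conn = pymysql.connect(host="api-int.testbed.osism.xyz",  # placeholder endpoint
                       user="root", password="<database_root_password>")
try:
    with conn.cursor() as cur:
        # "Creating grafana database"
        cur.execute("CREATE DATABASE IF NOT EXISTS grafana")
        # "Creating grafana database user and setting permissions"
        cur.execute("CREATE USER IF NOT EXISTS 'grafana'@'%%' IDENTIFIED BY %s",
                    ("<grafana_db_password>",))
        cur.execute("GRANT ALL PRIVILEGES ON grafana.* TO 'grafana'@'%'")
    conn.commit()
finally:
    conn.close()
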
2025-02-10 09:47:58.950639 | orchestrator | 2025-02-10 09:47:58.950652 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-02-10 09:47:58.950664 | orchestrator | Monday 10 February 2025 09:47:02 +0000 (0:00:02.831) 0:01:06.842 ******* 2025-02-10 09:47:58.950677 | orchestrator | 2025-02-10 09:47:58.950689 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-02-10 09:47:58.950701 | orchestrator | Monday 10 February 2025 09:47:02 +0000 (0:00:00.125) 0:01:06.968 ******* 2025-02-10 09:47:58.950714 | orchestrator | 2025-02-10 09:47:58.950726 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-02-10 09:47:58.950739 | orchestrator | Monday 10 February 2025 09:47:03 +0000 (0:00:00.261) 0:01:07.229 ******* 2025-02-10 09:47:58.950751 | orchestrator | 2025-02-10 09:47:58.950763 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-02-10 09:47:58.950775 | orchestrator | Monday 10 February 2025 09:47:03 +0000 (0:00:00.103) 0:01:07.333 ******* 2025-02-10 09:47:58.950788 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:47:58.950800 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:47:58.950813 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:47:58.950825 | orchestrator | 2025-02-10 09:47:58.950837 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-02-10 09:47:58.950849 | orchestrator | Monday 10 February 2025 09:47:06 +0000 (0:00:03.227) 0:01:10.560 ******* 2025-02-10 09:47:58.950868 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:47:58.950880 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:47:58.950892 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-02-10 09:47:58.950905 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
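[Editor's note] The FAILED - RETRYING lines above are the expected shape of the "Waiting for grafana to start on first node" handler: it simply polls the Grafana HTTP endpoint until it responds, then reports ok. A minimal stand-alone equivalent in Python (the requests library is assumed; the URL and retry budget are illustrative, not read from this job):

# Illustrative readiness poll for Grafana behind the internal API endpoint.
import time
import requests

URL = "https://api-int.testbed.osism.xyz:3000/login"  # placeholder endpoint

for attempt in range(12):  # mirrors the "12 retries left" seen in the log
    try:
        if requests.get(URL, verify=False, timeout=5).status_code == 200:
            print("grafana is up")
            break
    except requests.RequestException:
        pass
    print(f"grafana not ready yet, retry {attempt + 1}/12")
    time.sleep(10)
else:
    raise SystemExit("grafana did not come up in time")
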
2025-02-10 09:47:58.950918 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:47:58.950930 | orchestrator | 2025-02-10 09:47:58.950942 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-02-10 09:47:58.950955 | orchestrator | Monday 10 February 2025 09:47:34 +0000 (0:00:27.830) 0:01:38.391 ******* 2025-02-10 09:47:58.950967 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:47:58.950979 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:47:58.950992 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:47:58.951004 | orchestrator | 2025-02-10 09:47:58.951016 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-02-10 09:47:58.951028 | orchestrator | Monday 10 February 2025 09:47:51 +0000 (0:00:17.091) 0:01:55.483 ******* 2025-02-10 09:47:58.951041 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:47:58.951053 | orchestrator | 2025-02-10 09:47:58.951065 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-02-10 09:47:58.951078 | orchestrator | Monday 10 February 2025 09:47:53 +0000 (0:00:02.303) 0:01:57.786 ******* 2025-02-10 09:47:58.951090 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:47:58.951102 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:47:58.951114 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:47:58.951127 | orchestrator | 2025-02-10 09:47:58.951139 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-02-10 09:47:58.951156 | orchestrator | Monday 10 February 2025 09:47:54 +0000 (0:00:00.495) 0:01:58.281 ******* 2025-02-10 09:48:01.984484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-02-10 09:48:01.984625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-02-10 09:48:01.984647 | orchestrator | 2025-02-10 09:48:01.984665 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-02-10 09:48:01.984682 | orchestrator | Monday 10 February 2025 09:47:56 +0000 (0:00:02.390) 0:02:00.672 ******* 2025-02-10 09:48:01.984696 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:48:01.984712 | orchestrator | 2025-02-10 09:48:01.984727 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:48:01.984742 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-02-10 09:48:01.984758 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-02-10 09:48:01.984772 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-02-10 09:48:01.984787 | orchestrator | 2025-02-10 09:48:01.984801 | orchestrator | 2025-02-10 09:48:01.984815 | orchestrator | TASKS RECAP 
******************************************************************** 2025-02-10 09:48:01.984829 | orchestrator | Monday 10 February 2025 09:47:56 +0000 (0:00:00.351) 0:02:01.024 ******* 2025-02-10 09:48:01.984843 | orchestrator | =============================================================================== 2025-02-10 09:48:01.984857 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 44.07s 2025-02-10 09:48:01.985040 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.83s 2025-02-10 09:48:01.985060 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 17.09s 2025-02-10 09:48:01.985091 | orchestrator | grafana : Restart first grafana container ------------------------------- 3.23s 2025-02-10 09:48:01.985106 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.83s 2025-02-10 09:48:01.985120 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.39s 2025-02-10 09:48:01.985134 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.30s 2025-02-10 09:48:01.985149 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.25s 2025-02-10 09:48:01.985163 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.63s 2025-02-10 09:48:01.985202 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.59s 2025-02-10 09:48:01.985217 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.53s 2025-02-10 09:48:01.985231 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.36s 2025-02-10 09:48:01.985246 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.32s 2025-02-10 09:48:01.985260 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.30s 2025-02-10 09:48:01.985274 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.24s 2025-02-10 09:48:01.985288 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.00s 2025-02-10 09:48:01.985302 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.83s 2025-02-10 09:48:01.985316 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.83s 2025-02-10 09:48:01.985330 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.79s 2025-02-10 09:48:01.985344 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.72s 2025-02-10 09:48:01.985359 | orchestrator | 2025-02-10 09:47:58 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:01.985374 | orchestrator | 2025-02-10 09:47:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:01.985407 | orchestrator | 2025-02-10 09:48:01 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:05.054499 | orchestrator | 2025-02-10 09:48:01 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:05.054603 | orchestrator | 2025-02-10 09:48:01 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:05.054627 | orchestrator | 2025-02-10 09:48:05 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state 
STARTED 2025-02-10 09:48:05.059068 | orchestrator | 2025-02-10 09:48:05 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:08.110372 | orchestrator | 2025-02-10 09:48:05 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:08.110644 | orchestrator | 2025-02-10 09:48:08 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:08.114607 | orchestrator | 2025-02-10 09:48:08 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:08.114650 | orchestrator | 2025-02-10 09:48:08 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:11.149797 | orchestrator | 2025-02-10 09:48:11 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:14.189914 | orchestrator | 2025-02-10 09:48:11 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:14.190272 | orchestrator | 2025-02-10 09:48:11 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:14.190358 | orchestrator | 2025-02-10 09:48:14 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:14.191627 | orchestrator | 2025-02-10 09:48:14 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:14.191683 | orchestrator | 2025-02-10 09:48:14 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:17.261273 | orchestrator | 2025-02-10 09:48:17 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:20.315046 | orchestrator | 2025-02-10 09:48:17 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:20.315229 | orchestrator | 2025-02-10 09:48:17 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:20.315266 | orchestrator | 2025-02-10 09:48:20 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:20.315867 | orchestrator | 2025-02-10 09:48:20 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:20.316650 | orchestrator | 2025-02-10 09:48:20 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:23.354751 | orchestrator | 2025-02-10 09:48:23 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:26.416447 | orchestrator | 2025-02-10 09:48:23 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:26.416626 | orchestrator | 2025-02-10 09:48:23 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:26.416685 | orchestrator | 2025-02-10 09:48:26 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:26.416971 | orchestrator | 2025-02-10 09:48:26 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:29.449200 | orchestrator | 2025-02-10 09:48:26 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:29.449362 | orchestrator | 2025-02-10 09:48:29 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:32.482808 | orchestrator | 2025-02-10 09:48:29 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:32.483030 | orchestrator | 2025-02-10 09:48:29 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:32.483381 | orchestrator | 2025-02-10 09:48:32 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:35.520633 | orchestrator | 2025-02-10 09:48:32 | INFO  | Task 
0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:35.520831 | orchestrator | 2025-02-10 09:48:32 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:35.520889 | orchestrator | 2025-02-10 09:48:35 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:35.521569 | orchestrator | 2025-02-10 09:48:35 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:38.558724 | orchestrator | 2025-02-10 09:48:35 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:38.559037 | orchestrator | 2025-02-10 09:48:38 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:41.615704 | orchestrator | 2025-02-10 09:48:38 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:41.615848 | orchestrator | 2025-02-10 09:48:38 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:41.615888 | orchestrator | 2025-02-10 09:48:41 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:44.660156 | orchestrator | 2025-02-10 09:48:41 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:44.660377 | orchestrator | 2025-02-10 09:48:41 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:44.660415 | orchestrator | 2025-02-10 09:48:44 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:44.661715 | orchestrator | 2025-02-10 09:48:44 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:47.705543 | orchestrator | 2025-02-10 09:48:44 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:47.706541 | orchestrator | 2025-02-10 09:48:47 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:50.740943 | orchestrator | 2025-02-10 09:48:47 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:50.741046 | orchestrator | 2025-02-10 09:48:47 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:50.741075 | orchestrator | 2025-02-10 09:48:50 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:53.784622 | orchestrator | 2025-02-10 09:48:50 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:53.784759 | orchestrator | 2025-02-10 09:48:50 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:53.784795 | orchestrator | 2025-02-10 09:48:53 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:56.836866 | orchestrator | 2025-02-10 09:48:53 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:56.837009 | orchestrator | 2025-02-10 09:48:53 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:56.837049 | orchestrator | 2025-02-10 09:48:56 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:59.888923 | orchestrator | 2025-02-10 09:48:56 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:59.889074 | orchestrator | 2025-02-10 09:48:56 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:48:59.889115 | orchestrator | 2025-02-10 09:48:59 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:48:59.890549 | orchestrator | 2025-02-10 09:48:59 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:48:59.890597 | orchestrator 
| 2025-02-10 09:48:59 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:02.938910 | orchestrator | 2025-02-10 09:49:02 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:49:05.976749 | orchestrator | 2025-02-10 09:49:02 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:49:05.976896 | orchestrator | 2025-02-10 09:49:02 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:05.976936 | orchestrator | 2025-02-10 09:49:05 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:49:09.023054 | orchestrator | 2025-02-10 09:49:05 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:49:09.023236 | orchestrator | 2025-02-10 09:49:05 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:09.023282 | orchestrator | 2025-02-10 09:49:09 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:49:12.075379 | orchestrator | 2025-02-10 09:49:09 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:49:12.075501 | orchestrator | 2025-02-10 09:49:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:12.075618 | orchestrator | 2025-02-10 09:49:12 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:49:15.118356 | orchestrator | 2025-02-10 09:49:12 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:49:15.118501 | orchestrator | 2025-02-10 09:49:12 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:15.118540 | orchestrator | 2025-02-10 09:49:15 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:49:18.148829 | orchestrator | 2025-02-10 09:49:15 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:49:18.148974 | orchestrator | 2025-02-10 09:49:15 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:18.149014 | orchestrator | 2025-02-10 09:49:18 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:49:21.221561 | orchestrator | 2025-02-10 09:49:18 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:49:21.221710 | orchestrator | 2025-02-10 09:49:18 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:21.221748 | orchestrator | 2025-02-10 09:49:21 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:49:21.221910 | orchestrator | 2025-02-10 09:49:21 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:49:24.269799 | orchestrator | 2025-02-10 09:49:21 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:24.269961 | orchestrator | 2025-02-10 09:49:24 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:49:27.307873 | orchestrator | 2025-02-10 09:49:24 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:49:27.308181 | orchestrator | 2025-02-10 09:49:24 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:27.308259 | orchestrator | 2025-02-10 09:49:27 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:49:30.355042 | orchestrator | 2025-02-10 09:49:27 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:49:30.355169 | orchestrator | 2025-02-10 09:49:27 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:30.355237 | 
orchestrator | 2025-02-10 09:49:30 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:49:30.356530 | orchestrator | 2025-02-10 09:49:30 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:49:33.399665 | orchestrator | 2025-02-10 09:49:30 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:33.399831 | orchestrator | 2025-02-10 09:49:33 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:49:33.403335 | orchestrator | 2025-02-10 09:49:33 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:49:36.465323 | orchestrator | 2025-02-10 09:49:33 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:36.465459 | orchestrator | 2025-02-10 09:49:36 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:49:36.466817 | orchestrator | 2025-02-10 09:49:36 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:49:39.517177 | orchestrator | 2025-02-10 09:49:36 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:39.517544 | orchestrator | 2025-02-10 09:49:39 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:49:42.541924 | orchestrator | 2025-02-10 09:49:39 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:49:42.542095 | orchestrator | 2025-02-10 09:49:39 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:42.542167 | orchestrator | 2025-02-10 09:49:42 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:49:45.578323 | orchestrator | 2025-02-10 09:49:42 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:49:45.578505 | orchestrator | 2025-02-10 09:49:42 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:45.578533 | orchestrator | 2025-02-10 09:49:45 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:49:48.620819 | orchestrator | 2025-02-10 09:49:45 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:49:48.620941 | orchestrator | 2025-02-10 09:49:45 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:48.620971 | orchestrator | 2025-02-10 09:49:48 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:49:48.621254 | orchestrator | 2025-02-10 09:49:48 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:49:51.665033 | orchestrator | 2025-02-10 09:49:48 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:51.665287 | orchestrator | 2025-02-10 09:49:51 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:49:51.665720 | orchestrator | 2025-02-10 09:49:51 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:49:54.710821 | orchestrator | 2025-02-10 09:49:51 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:54.710954 | orchestrator | 2025-02-10 09:49:54 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:49:54.711251 | orchestrator | 2025-02-10 09:49:54 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:49:57.753323 | orchestrator | 2025-02-10 09:49:54 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:49:57.753721 | orchestrator | 2025-02-10 09:49:57 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state 
STARTED 2025-02-10 09:50:00.798836 | orchestrator | 2025-02-10 09:49:57 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:00.799001 | orchestrator | 2025-02-10 09:49:57 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:00.799031 | orchestrator | 2025-02-10 09:50:00 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:00.800678 | orchestrator | 2025-02-10 09:50:00 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:03.855419 | orchestrator | 2025-02-10 09:50:00 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:03.855619 | orchestrator | 2025-02-10 09:50:03 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:06.892303 | orchestrator | 2025-02-10 09:50:03 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:06.892611 | orchestrator | 2025-02-10 09:50:03 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:06.892661 | orchestrator | 2025-02-10 09:50:06 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:09.923218 | orchestrator | 2025-02-10 09:50:06 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:09.923370 | orchestrator | 2025-02-10 09:50:06 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:09.923413 | orchestrator | 2025-02-10 09:50:09 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:12.958778 | orchestrator | 2025-02-10 09:50:09 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:12.959005 | orchestrator | 2025-02-10 09:50:09 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:12.959100 | orchestrator | 2025-02-10 09:50:12 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:16.011257 | orchestrator | 2025-02-10 09:50:12 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:16.011402 | orchestrator | 2025-02-10 09:50:12 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:16.011441 | orchestrator | 2025-02-10 09:50:16 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:19.056534 | orchestrator | 2025-02-10 09:50:16 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:19.056685 | orchestrator | 2025-02-10 09:50:16 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:19.056727 | orchestrator | 2025-02-10 09:50:19 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:22.102178 | orchestrator | 2025-02-10 09:50:19 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:22.102315 | orchestrator | 2025-02-10 09:50:19 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:22.102347 | orchestrator | 2025-02-10 09:50:22 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:22.104357 | orchestrator | 2025-02-10 09:50:22 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:25.138228 | orchestrator | 2025-02-10 09:50:22 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:25.138410 | orchestrator | 2025-02-10 09:50:25 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:28.195645 | orchestrator | 2025-02-10 09:50:25 | INFO  | Task 
0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:28.195768 | orchestrator | 2025-02-10 09:50:25 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:28.195797 | orchestrator | 2025-02-10 09:50:28 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:28.196418 | orchestrator | 2025-02-10 09:50:28 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:31.239092 | orchestrator | 2025-02-10 09:50:28 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:31.239268 | orchestrator | 2025-02-10 09:50:31 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:34.277738 | orchestrator | 2025-02-10 09:50:31 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:34.277889 | orchestrator | 2025-02-10 09:50:31 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:34.277959 | orchestrator | 2025-02-10 09:50:34 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:37.333676 | orchestrator | 2025-02-10 09:50:34 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:37.333958 | orchestrator | 2025-02-10 09:50:34 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:37.334010 | orchestrator | 2025-02-10 09:50:37 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:40.370672 | orchestrator | 2025-02-10 09:50:37 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:40.370801 | orchestrator | 2025-02-10 09:50:37 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:40.370831 | orchestrator | 2025-02-10 09:50:40 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:43.433181 | orchestrator | 2025-02-10 09:50:40 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:43.433440 | orchestrator | 2025-02-10 09:50:40 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:43.433486 | orchestrator | 2025-02-10 09:50:43 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:46.470764 | orchestrator | 2025-02-10 09:50:43 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:46.471019 | orchestrator | 2025-02-10 09:50:43 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:46.471065 | orchestrator | 2025-02-10 09:50:46 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:49.518765 | orchestrator | 2025-02-10 09:50:46 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:49.518935 | orchestrator | 2025-02-10 09:50:46 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:49.518976 | orchestrator | 2025-02-10 09:50:49 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:49.520048 | orchestrator | 2025-02-10 09:50:49 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:49.520189 | orchestrator | 2025-02-10 09:50:49 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:52.563424 | orchestrator | 2025-02-10 09:50:52 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:55.595187 | orchestrator | 2025-02-10 09:50:52 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:55.596251 | orchestrator 
| 2025-02-10 09:50:52 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:55.596332 | orchestrator | 2025-02-10 09:50:55 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:58.633360 | orchestrator | 2025-02-10 09:50:55 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state STARTED 2025-02-10 09:50:58.633501 | orchestrator | 2025-02-10 09:50:55 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:50:58.633552 | orchestrator | 2025-02-10 09:50:58 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state STARTED 2025-02-10 09:50:58.637549 | orchestrator | 2025-02-10 09:50:58 | INFO  | Task 0e632de1-53c1-435b-92ca-63329bf84711 is in state SUCCESS 2025-02-10 09:50:58.639118 | orchestrator | 2025-02-10 09:50:58.639160 | orchestrator | 2025-02-10 09:50:58.639176 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:50:58.639191 | orchestrator | 2025-02-10 09:50:58.639205 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-02-10 09:50:58.639220 | orchestrator | Monday 10 February 2025 09:41:19 +0000 (0:00:00.281) 0:00:00.281 ******* 2025-02-10 09:50:58.639235 | orchestrator | changed: [testbed-manager] 2025-02-10 09:50:58.639251 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.639266 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:50:58.639280 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:50:58.639295 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:50:58.639309 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:50:58.639323 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:50:58.639338 | orchestrator | 2025-02-10 09:50:58.639352 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:50:58.639366 | orchestrator | Monday 10 February 2025 09:41:19 +0000 (0:00:00.816) 0:00:01.097 ******* 2025-02-10 09:50:58.639776 | orchestrator | changed: [testbed-manager] 2025-02-10 09:50:58.639802 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.639817 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:50:58.639832 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:50:58.639877 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:50:58.639893 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:50:58.639908 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:50:58.639923 | orchestrator | 2025-02-10 09:50:58.639939 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:50:58.639954 | orchestrator | Monday 10 February 2025 09:41:20 +0000 (0:00:00.888) 0:00:01.986 ******* 2025-02-10 09:50:58.639971 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-02-10 09:50:58.639987 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-02-10 09:50:58.640002 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-02-10 09:50:58.640018 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-02-10 09:50:58.640033 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-02-10 09:50:58.640048 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-02-10 09:50:58.640063 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-02-10 09:50:58.640078 | orchestrator | 2025-02-10 09:50:58.640117 | orchestrator | PLAY 
[Bootstrap nova API databases] ******************************************** 2025-02-10 09:50:58.640131 | orchestrator | 2025-02-10 09:50:58.640145 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-02-10 09:50:58.640160 | orchestrator | Monday 10 February 2025 09:41:21 +0000 (0:00:00.889) 0:00:02.876 ******* 2025-02-10 09:50:58.640174 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:50:58.640188 | orchestrator | 2025-02-10 09:50:58.640202 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-02-10 09:50:58.640216 | orchestrator | Monday 10 February 2025 09:41:22 +0000 (0:00:00.611) 0:00:03.488 ******* 2025-02-10 09:50:58.640231 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-02-10 09:50:58.640246 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-02-10 09:50:58.640260 | orchestrator | 2025-02-10 09:50:58.640274 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-02-10 09:50:58.640289 | orchestrator | Monday 10 February 2025 09:41:26 +0000 (0:00:04.431) 0:00:07.920 ******* 2025-02-10 09:50:58.640303 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-10 09:50:58.640317 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-10 09:50:58.640782 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.640807 | orchestrator | 2025-02-10 09:50:58.640823 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-02-10 09:50:58.640854 | orchestrator | Monday 10 February 2025 09:41:31 +0000 (0:00:04.705) 0:00:12.625 ******* 2025-02-10 09:50:58.640871 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.640886 | orchestrator | 2025-02-10 09:50:58.640902 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-02-10 09:50:58.640917 | orchestrator | Monday 10 February 2025 09:41:31 +0000 (0:00:00.512) 0:00:13.137 ******* 2025-02-10 09:50:58.640933 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.640948 | orchestrator | 2025-02-10 09:50:58.640964 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-02-10 09:50:58.640979 | orchestrator | Monday 10 February 2025 09:41:33 +0000 (0:00:01.357) 0:00:14.495 ******* 2025-02-10 09:50:58.640994 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.641010 | orchestrator | 2025-02-10 09:50:58.641025 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-02-10 09:50:58.641041 | orchestrator | Monday 10 February 2025 09:41:40 +0000 (0:00:06.766) 0:00:21.262 ******* 2025-02-10 09:50:58.641057 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.641072 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.641121 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.641138 | orchestrator | 2025-02-10 09:50:58.641154 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-02-10 09:50:58.641170 | orchestrator | Monday 10 February 2025 09:41:40 +0000 (0:00:00.578) 0:00:21.840 ******* 2025-02-10 09:50:58.641199 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:50:58.641215 | orchestrator | 2025-02-10 09:50:58.641231 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 
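[Editor's note] The cell0 mapping, cell listing and cell creation tasks that follow wrap nova-manage cell_v2 calls executed inside the Nova bootstrap container. A hedged sketch of the underlying commands, driven from Python via subprocess; the container name and invocation details are illustrative assumptions, not copied from this job:

# Illustrative wrapper around the nova-manage cell_v2 commands used during bootstrap.
import subprocess

def nova_manage(*args: str) -> str:
    """Run a nova-manage command inside the Nova API container (illustrative)."""
    cmd = ["docker", "exec", "nova_api", "nova-manage", *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# "Create cell0 mappings"
nova_manage("cell_v2", "map_cell0")
# "Get a list of existing cells"
print(nova_manage("cell_v2", "list_cells", "--verbose"))
# "Create cell" (only needed when the default cell does not exist yet)
nova_manage("cell_v2", "create_cell")
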
2025-02-10 09:50:58.641246 | orchestrator | Monday 10 February 2025 09:42:08 +0000 (0:00:27.506) 0:00:49.346 ******* 2025-02-10 09:50:58.641262 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.641278 | orchestrator | 2025-02-10 09:50:58.641294 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-02-10 09:50:58.641310 | orchestrator | Monday 10 February 2025 09:42:18 +0000 (0:00:10.306) 0:00:59.653 ******* 2025-02-10 09:50:58.641326 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:50:58.641351 | orchestrator | 2025-02-10 09:50:58.641937 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-02-10 09:50:58.641959 | orchestrator | Monday 10 February 2025 09:42:29 +0000 (0:00:10.735) 0:01:10.389 ******* 2025-02-10 09:50:58.642007 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:50:58.642144 | orchestrator | 2025-02-10 09:50:58.642163 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-02-10 09:50:58.642177 | orchestrator | Monday 10 February 2025 09:42:34 +0000 (0:00:05.737) 0:01:16.126 ******* 2025-02-10 09:50:58.642192 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.642206 | orchestrator | 2025-02-10 09:50:58.642220 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-02-10 09:50:58.642234 | orchestrator | Monday 10 February 2025 09:42:37 +0000 (0:00:02.706) 0:01:18.833 ******* 2025-02-10 09:50:58.642248 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:50:58.642341 | orchestrator | 2025-02-10 09:50:58.642744 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-02-10 09:50:58.642761 | orchestrator | Monday 10 February 2025 09:42:40 +0000 (0:00:03.184) 0:01:22.017 ******* 2025-02-10 09:50:58.642775 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:50:58.642788 | orchestrator | 2025-02-10 09:50:58.642801 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-02-10 09:50:58.642815 | orchestrator | Monday 10 February 2025 09:42:57 +0000 (0:00:16.745) 0:01:38.762 ******* 2025-02-10 09:50:58.642828 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.642842 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.642855 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.642869 | orchestrator | 2025-02-10 09:50:58.642882 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-02-10 09:50:58.642895 | orchestrator | 2025-02-10 09:50:58.642909 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-02-10 09:50:58.642922 | orchestrator | Monday 10 February 2025 09:42:58 +0000 (0:00:00.881) 0:01:39.644 ******* 2025-02-10 09:50:58.642936 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:50:58.642949 | orchestrator | 2025-02-10 09:50:58.643000 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-02-10 09:50:58.643014 | orchestrator | Monday 10 February 2025 09:43:00 +0000 (0:00:02.258) 0:01:41.902 ******* 2025-02-10 09:50:58.643028 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.643041 | orchestrator | skipping: [testbed-node-2] 2025-02-10 
09:50:58.643138 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.643159 | orchestrator | 2025-02-10 09:50:58.643173 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-02-10 09:50:58.643186 | orchestrator | Monday 10 February 2025 09:43:03 +0000 (0:00:03.270) 0:01:45.173 ******* 2025-02-10 09:50:58.643199 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.643212 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.643226 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.643239 | orchestrator | 2025-02-10 09:50:58.643252 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-02-10 09:50:58.643819 | orchestrator | Monday 10 February 2025 09:43:06 +0000 (0:00:02.375) 0:01:47.548 ******* 2025-02-10 09:50:58.643867 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.643881 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.643895 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.643908 | orchestrator | 2025-02-10 09:50:58.643922 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-02-10 09:50:58.643936 | orchestrator | Monday 10 February 2025 09:43:07 +0000 (0:00:01.318) 0:01:48.867 ******* 2025-02-10 09:50:58.643949 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-02-10 09:50:58.643963 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.643977 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-02-10 09:50:58.643991 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.644004 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-02-10 09:50:58.644174 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-02-10 09:50:58.644188 | orchestrator | 2025-02-10 09:50:58.644201 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-02-10 09:50:58.644215 | orchestrator | Monday 10 February 2025 09:43:17 +0000 (0:00:09.866) 0:01:58.734 ******* 2025-02-10 09:50:58.644228 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.644242 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.644255 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.644268 | orchestrator | 2025-02-10 09:50:58.644282 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-02-10 09:50:58.644295 | orchestrator | Monday 10 February 2025 09:43:18 +0000 (0:00:00.877) 0:01:59.611 ******* 2025-02-10 09:50:58.644309 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-02-10 09:50:58.644322 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.644336 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-02-10 09:50:58.644349 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.644362 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-02-10 09:50:58.644376 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.644389 | orchestrator | 2025-02-10 09:50:58.644402 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-02-10 09:50:58.644762 | orchestrator | Monday 10 February 2025 09:43:21 +0000 (0:00:03.158) 0:02:02.770 ******* 2025-02-10 09:50:58.644777 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.644791 | orchestrator | skipping: [testbed-node-2] 2025-02-10 
09:50:58.644803 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.644816 | orchestrator | 2025-02-10 09:50:58.644828 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-02-10 09:50:58.644841 | orchestrator | Monday 10 February 2025 09:43:22 +0000 (0:00:01.163) 0:02:03.934 ******* 2025-02-10 09:50:58.644853 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.644865 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.644878 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.644890 | orchestrator | 2025-02-10 09:50:58.644903 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-02-10 09:50:58.644915 | orchestrator | Monday 10 February 2025 09:43:24 +0000 (0:00:01.513) 0:02:05.447 ******* 2025-02-10 09:50:58.644928 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.644940 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.645032 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.645051 | orchestrator | 2025-02-10 09:50:58.645064 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-02-10 09:50:58.645076 | orchestrator | Monday 10 February 2025 09:43:28 +0000 (0:00:04.167) 0:02:09.615 ******* 2025-02-10 09:50:58.645145 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.645158 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.645171 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:50:58.645183 | orchestrator | 2025-02-10 09:50:58.645196 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-02-10 09:50:58.645221 | orchestrator | Monday 10 February 2025 09:43:52 +0000 (0:00:24.369) 0:02:33.985 ******* 2025-02-10 09:50:58.645233 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.645246 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.645258 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:50:58.645270 | orchestrator | 2025-02-10 09:50:58.645283 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-02-10 09:50:58.645295 | orchestrator | Monday 10 February 2025 09:44:10 +0000 (0:00:18.086) 0:02:52.071 ******* 2025-02-10 09:50:58.645308 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:50:58.645320 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.645333 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.645345 | orchestrator | 2025-02-10 09:50:58.645357 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-02-10 09:50:58.645369 | orchestrator | Monday 10 February 2025 09:44:13 +0000 (0:00:02.219) 0:02:54.290 ******* 2025-02-10 09:50:58.645382 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.645394 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.645406 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.645418 | orchestrator | 2025-02-10 09:50:58.645431 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-02-10 09:50:58.645444 | orchestrator | Monday 10 February 2025 09:44:24 +0000 (0:00:11.084) 0:03:05.375 ******* 2025-02-10 09:50:58.645456 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.645469 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.645481 | orchestrator | skipping: [testbed-node-0] 2025-02-10 
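
The cell bootstrap sequence above ("Running Nova cell bootstrap container" through "Create cell") amounts to syncing the cell database schema and registering the cell in the API database. A sketch with placeholder connection strings (the real values are rendered into nova.conf by the role):

    nova-manage db sync
    nova-manage cell_v2 create_cell --name cell1 \
        --database_connection 'mysql+pymysql://nova:REPLACE_ME@DB_VIP/nova' \
        --transport-url 'rabbit://openstack:REPLACE_ME@RABBIT_HOST:5672//'
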
09:50:58.645493 | orchestrator | 2025-02-10 09:50:58.645505 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-02-10 09:50:58.645518 | orchestrator | Monday 10 February 2025 09:44:28 +0000 (0:00:04.668) 0:03:10.043 ******* 2025-02-10 09:50:58.645530 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.645542 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.645561 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.645571 | orchestrator | 2025-02-10 09:50:58.645581 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-02-10 09:50:58.645591 | orchestrator | 2025-02-10 09:50:58.645601 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-02-10 09:50:58.645611 | orchestrator | Monday 10 February 2025 09:44:29 +0000 (0:00:01.067) 0:03:11.111 ******* 2025-02-10 09:50:58.645621 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:50:58.645633 | orchestrator | 2025-02-10 09:50:58.645643 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-02-10 09:50:58.645654 | orchestrator | Monday 10 February 2025 09:44:30 +0000 (0:00:01.045) 0:03:12.156 ******* 2025-02-10 09:50:58.645666 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-02-10 09:50:58.645677 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-02-10 09:50:58.645688 | orchestrator | 2025-02-10 09:50:58.645700 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-02-10 09:50:58.645733 | orchestrator | Monday 10 February 2025 09:44:34 +0000 (0:00:03.488) 0:03:15.644 ******* 2025-02-10 09:50:58.645745 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-02-10 09:50:58.645757 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-02-10 09:50:58.645769 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-02-10 09:50:58.645781 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-02-10 09:50:58.645792 | orchestrator | 2025-02-10 09:50:58.645804 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-02-10 09:50:58.645815 | orchestrator | Monday 10 February 2025 09:44:42 +0000 (0:00:07.774) 0:03:23.419 ******* 2025-02-10 09:50:58.645832 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-10 09:50:58.645844 | orchestrator | 2025-02-10 09:50:58.645855 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-02-10 09:50:58.645866 | orchestrator | Monday 10 February 2025 09:44:46 +0000 (0:00:03.831) 0:03:27.250 ******* 2025-02-10 09:50:58.645878 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:50:58.645889 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-02-10 09:50:58.645899 | orchestrator | 2025-02-10 09:50:58.645911 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-02-10 09:50:58.645923 | orchestrator | Monday 10 February 
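
The service-ks-register tasks in this play register the Nova service, its endpoints, the service user and the role assignments recorded in the log. Roughly the same result via the openstack CLI, with the region name and password assumed:

    openstack service create --name nova --description "OpenStack Compute" compute
    openstack endpoint create --region RegionOne compute internal https://api-int.testbed.osism.xyz:8774/v2.1
    openstack endpoint create --region RegionOne compute public https://api.testbed.osism.xyz:8774/v2.1
    openstack user create --project service --password REPLACE_ME nova
    openstack role add --project service --user nova admin
    openstack role add --project service --user nova service
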
2025 09:44:49 +0000 (0:00:03.887) 0:03:31.138 ******* 2025-02-10 09:50:58.645934 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:50:58.645945 | orchestrator | 2025-02-10 09:50:58.645956 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-02-10 09:50:58.645967 | orchestrator | Monday 10 February 2025 09:44:52 +0000 (0:00:02.814) 0:03:33.952 ******* 2025-02-10 09:50:58.645979 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-02-10 09:50:58.646315 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-02-10 09:50:58.646334 | orchestrator | 2025-02-10 09:50:58.646345 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-02-10 09:50:58.646441 | orchestrator | Monday 10 February 2025 09:45:00 +0000 (0:00:07.320) 0:03:41.273 ******* 2025-02-10 09:50:58.646458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:50:58.646474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:50:58.646494 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:50:58.646563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.646581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.646592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.646603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.646614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.646631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.646642 | orchestrator | 2025-02-10 09:50:58.646653 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-02-10 09:50:58.646663 | orchestrator | Monday 10 February 2025 09:45:01 +0000 (0:00:01.538) 0:03:42.811 ******* 2025-02-10 09:50:58.646673 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.646683 | orchestrator | 2025-02-10 09:50:58.646693 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-02-10 09:50:58.646703 | orchestrator | Monday 10 February 2025 09:45:01 +0000 (0:00:00.112) 0:03:42.924 ******* 2025-02-10 09:50:58.646713 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.646724 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.646734 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.646744 | orchestrator | 2025-02-10 09:50:58.646754 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-02-10 09:50:58.646764 | orchestrator | Monday 10 February 2025 09:45:02 +0000 (0:00:00.372) 0:03:43.296 ******* 2025-02-10 09:50:58.646774 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-10 09:50:58.646785 | orchestrator | 2025-02-10 09:50:58.646795 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-02-10 09:50:58.646858 | orchestrator | Monday 10 February 2025 09:45:02 +0000 (0:00:00.452) 0:03:43.749 ******* 2025-02-10 09:50:58.646873 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.646883 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.646893 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.646903 | orchestrator | 2025-02-10 09:50:58.646913 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-02-10 09:50:58.646924 | orchestrator | Monday 10 February 2025 09:45:03 +0000 (0:00:00.561) 0:03:44.310 ******* 2025-02-10 09:50:58.646934 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, 
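
The config.json files copied for nova-cell-bootstrap above and for the nova services further below follow kolla's standard layout: a command to run plus a list of files to install from the bind-mounted /var/lib/kolla/config_files/ directory. A minimal sketch for nova-api, printed via a heredoc (the command value is an assumption; kolla-ansible generates these files, so they should not be hand-edited on a managed node):

    cat <<'EOF'
    {
        "command": "apache2 -DFOREGROUND",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/nova.conf",
                "dest": "/etc/nova/nova.conf",
                "owner": "nova",
                "perm": "0600"
            }
        ]
    }
    EOF
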
testbed-node-2 2025-02-10 09:50:58.646944 | orchestrator | 2025-02-10 09:50:58.646954 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-02-10 09:50:58.646965 | orchestrator | Monday 10 February 2025 09:45:03 +0000 (0:00:00.775) 0:03:45.086 ******* 2025-02-10 09:50:58.646976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:50:58.646993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:50:58.647059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:50:58.647075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.647162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.647182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.647193 | orchestrator | 2025-02-10 09:50:58.647203 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-02-10 09:50:58.647214 | orchestrator | Monday 10 February 2025 09:45:06 +0000 (0:00:02.705) 0:03:47.792 ******* 2025-02-10 09:50:58.647225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:50:58.647236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.647247 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.647319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:50:58.647343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.647354 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.647366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:50:58.647378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.647403 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.647414 | orchestrator | 2025-02-10 09:50:58.647424 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-02-10 09:50:58.647435 | orchestrator | Monday 10 February 2025 09:45:07 +0000 (0:00:00.853) 0:03:48.646 ******* 2025-02-10 09:50:58.647502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:50:58.647526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.647537 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.647548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:50:58.647560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.647569 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.647626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:50:58.647639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.647657 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.647666 | orchestrator | 2025-02-10 09:50:58.647674 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-02-10 09:50:58.647683 | orchestrator | Monday 10 February 2025 09:45:08 +0000 (0:00:01.232) 0:03:49.878 ******* 2025-02-10 09:50:58.647692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:50:58.647702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:50:58.647761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:50:58.647780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.647790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.647799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.647808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.647817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.647874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.647893 | orchestrator | 2025-02-10 09:50:58.647914 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-02-10 09:50:58.647924 | orchestrator | Monday 10 February 2025 09:45:11 +0000 (0:00:03.164) 0:03:53.043 ******* 2025-02-10 09:50:58.647933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:50:58.647943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': 
{'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:50:58.648001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:50:58.648020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.648030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.648039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.648048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.648068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.648078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.648107 | orchestrator | 2025-02-10 09:50:58.648116 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-02-10 09:50:58.648130 | orchestrator | Monday 10 February 2025 09:45:22 +0000 (0:00:10.420) 0:04:03.464 ******* 2025-02-10 09:50:58.648195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:50:58.648209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.648219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.648228 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.648237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:50:58.648306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.648343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-10 09:50:58.648353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.648362 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.648371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.648380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.648389 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.648398 | orchestrator | 2025-02-10 09:50:58.648406 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-02-10 09:50:58.648415 | orchestrator | Monday 10 February 2025 09:45:23 +0000 (0:00:01.335) 0:04:04.800 ******* 2025-02-10 09:50:58.648424 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.648433 | orchestrator 
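
The "Check nova containers" task below feeds these same service definitions, including their healthchecks, into the container deployment. Once the nova_api and nova_scheduler containers are up, the checks can be exercised by hand (container and script names taken from the log items; docker access on the node assumed, and the healthcheck_curl address varies per node):

    docker ps --filter name=nova_ --format '{{.Names}}\t{{.Status}}'
    docker exec nova_api healthcheck_curl http://192.168.16.10:8774
    docker exec nova_scheduler healthcheck_port nova-scheduler 5672
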
| changed: [testbed-node-1] 2025-02-10 09:50:58.648441 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:50:58.648455 | orchestrator | 2025-02-10 09:50:58.648464 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-02-10 09:50:58.648472 | orchestrator | Monday 10 February 2025 09:45:26 +0000 (0:00:03.031) 0:04:07.831 ******* 2025-02-10 09:50:58.648481 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.648489 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.648498 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.648506 | orchestrator | 2025-02-10 09:50:58.648515 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-02-10 09:50:58.648524 | orchestrator | Monday 10 February 2025 09:45:27 +0000 (0:00:00.739) 0:04:08.570 ******* 2025-02-10 09:50:58.648583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:50:58.648597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:50:58.648638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-10 09:50:58.648706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.648719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.648728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.648737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.648747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.648756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.648770 | orchestrator | 2025-02-10 09:50:58.648779 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-02-10 09:50:58.648788 | orchestrator | Monday 10 February 2025 09:45:30 +0000 (0:00:03.038) 0:04:11.609 ******* 2025-02-10 09:50:58.648796 | orchestrator | 2025-02-10 09:50:58.648805 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-02-10 09:50:58.648813 | orchestrator | Monday 10 February 2025 09:45:30 +0000 (0:00:00.357) 0:04:11.966 ******* 2025-02-10 09:50:58.648822 | orchestrator | 2025-02-10 09:50:58.648830 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-02-10 09:50:58.648839 | orchestrator | Monday 10 February 2025 09:45:30 +0000 (0:00:00.134) 0:04:12.101 ******* 2025-02-10 09:50:58.648848 | orchestrator | 2025-02-10 09:50:58.648856 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-02-10 09:50:58.648865 | orchestrator | Monday 10 February 2025 09:45:31 +0000 (0:00:00.311) 0:04:12.413 ******* 2025-02-10 09:50:58.648873 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.648882 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:50:58.648890 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:50:58.648899 | orchestrator | 2025-02-10 09:50:58.648907 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-02-10 09:50:58.648916 | orchestrator | Monday 10 February 2025 09:45:43 +0000 (0:00:12.484) 0:04:24.898 ******* 2025-02-10 09:50:58.648924 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.648933 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:50:58.648942 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:50:58.648950 | orchestrator | 2025-02-10 09:50:58.649005 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-02-10 09:50:58.649017 | orchestrator | 2025-02-10 09:50:58.649026 | orchestrator | TASK [nova-cell : include_tasks] 
*********************************************** 2025-02-10 09:50:58.649034 | orchestrator | Monday 10 February 2025 09:45:49 +0000 (0:00:05.794) 0:04:30.693 ******* 2025-02-10 09:50:58.649043 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:50:58.649052 | orchestrator | 2025-02-10 09:50:58.649072 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-02-10 09:50:58.649097 | orchestrator | Monday 10 February 2025 09:45:51 +0000 (0:00:01.701) 0:04:32.394 ******* 2025-02-10 09:50:58.649106 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.649115 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:50:58.649124 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.649132 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.649141 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.649149 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.649158 | orchestrator | 2025-02-10 09:50:58.649167 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-02-10 09:50:58.649175 | orchestrator | Monday 10 February 2025 09:45:52 +0000 (0:00:00.866) 0:04:33.260 ******* 2025-02-10 09:50:58.649184 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.649192 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.649201 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.649209 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:50:58.649218 | orchestrator | 2025-02-10 09:50:58.649226 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-02-10 09:50:58.649235 | orchestrator | Monday 10 February 2025 09:45:53 +0000 (0:00:01.441) 0:04:34.702 ******* 2025-02-10 09:50:58.649244 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-02-10 09:50:58.649252 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-02-10 09:50:58.649261 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-02-10 09:50:58.649269 | orchestrator | 2025-02-10 09:50:58.649284 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-02-10 09:50:58.649292 | orchestrator | Monday 10 February 2025 09:45:54 +0000 (0:00:00.726) 0:04:35.428 ******* 2025-02-10 09:50:58.649301 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-02-10 09:50:58.649309 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-02-10 09:50:58.649318 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-02-10 09:50:58.649326 | orchestrator | 2025-02-10 09:50:58.649335 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-02-10 09:50:58.649344 | orchestrator | Monday 10 February 2025 09:45:55 +0000 (0:00:01.528) 0:04:36.957 ******* 2025-02-10 09:50:58.649352 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-02-10 09:50:58.649361 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.649369 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-02-10 09:50:58.649378 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:50:58.649386 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-02-10 09:50:58.649395 | 
orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.649404 | orchestrator | 2025-02-10 09:50:58.649412 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-02-10 09:50:58.649425 | orchestrator | Monday 10 February 2025 09:45:56 +0000 (0:00:01.101) 0:04:38.059 ******* 2025-02-10 09:50:58.649434 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-10 09:50:58.649443 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-10 09:50:58.649451 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-02-10 09:50:58.649460 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-02-10 09:50:58.649469 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.649477 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-10 09:50:58.649486 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-10 09:50:58.649495 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.649503 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-10 09:50:58.649512 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-10 09:50:58.649520 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.649529 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-02-10 09:50:58.649538 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-02-10 09:50:58.649546 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-02-10 09:50:58.649555 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-02-10 09:50:58.649563 | orchestrator | 2025-02-10 09:50:58.649572 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-02-10 09:50:58.649580 | orchestrator | Monday 10 February 2025 09:45:59 +0000 (0:00:02.309) 0:04:40.369 ******* 2025-02-10 09:50:58.649589 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.649597 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.649606 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:50:58.649615 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.649623 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:50:58.649632 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:50:58.649645 | orchestrator | 2025-02-10 09:50:58.649655 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-02-10 09:50:58.649665 | orchestrator | Monday 10 February 2025 09:46:00 +0000 (0:00:01.603) 0:04:41.973 ******* 2025-02-10 09:50:58.649725 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.649739 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.649749 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.649763 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:50:58.649773 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:50:58.649781 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:50:58.649790 | orchestrator | 2025-02-10 09:50:58.649802 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-02-10 09:50:58.649811 | 
orchestrator | Monday 10 February 2025 09:46:02 +0000 (0:00:01.978) 0:04:43.951 ******* 2025-02-10 09:50:58.649821 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:50:58.649831 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:50:58.649840 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:50:58.649850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.649859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.649921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.649935 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.649945 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:50:58.649954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.649963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.649972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.649981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.650142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.650152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.650162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.650171 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.650249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.650257 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.650274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.650283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.650291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.650347 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.650368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.650390 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.650399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.650489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.650497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.650510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.650519 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 
'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.650594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.650606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.650615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.650632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}}) 2025-02-10 09:50:58.650640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.650665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.650729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650755 | orchestrator | 2025-02-10 09:50:58.650769 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-02-10 09:50:58.650777 | orchestrator | Monday 10 February 2025 09:46:05 +0000 (0:00:03.048) 0:04:46.999 ******* 2025-02-10 09:50:58.650785 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:50:58.650794 | orchestrator | 2025-02-10 09:50:58.650802 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-02-10 09:50:58.650809 | orchestrator | Monday 10 February 2025 09:46:07 +0000 (0:00:01.314) 0:04:48.314 ******* 2025-02-10 09:50:58.650818 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650871 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650882 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650891 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650986 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:50:58.650998 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:50:58.651006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.651015 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.651028 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.651037 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.651106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.651119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.651128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.651137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.651152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.651161 | orchestrator | 2025-02-10 09:50:58.651169 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-02-10 09:50:58.651178 | orchestrator | Monday 10 February 2025 09:46:11 +0000 (0:00:04.624) 0:04:52.938 ******* 2025-02-10 09:50:58.651186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.651250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.651263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.651272 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.651280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.651295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.651303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.651311 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:50:58.651369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.651382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.651390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': 
True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.651403 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.651412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.651420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.651429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.651437 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.651497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.651510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.651519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.651533 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.651541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.651549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.651557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.651566 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.651574 | orchestrator | 2025-02-10 09:50:58.651582 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-02-10 09:50:58.651590 | orchestrator | Monday 10 February 2025 09:46:13 +0000 (0:00:01.930) 0:04:54.869 ******* 2025-02-10 09:50:58.651642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.651654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.651663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.651691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.651700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.651708 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.651717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.651725 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:50:58.651753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.651768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.651778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.651786 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.651795 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.651803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.651812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.651820 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.651846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.651861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.651870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.651878 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.651886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.651895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.651903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.651912 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.651920 | orchestrator | 2025-02-10 09:50:58.651928 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-02-10 09:50:58.651936 | orchestrator | Monday 10 February 2025 09:46:17 +0000 (0:00:03.462) 0:04:58.331 ******* 2025-02-10 09:50:58.651944 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.651952 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.651960 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.651968 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-10 09:50:58.651976 | orchestrator | 2025-02-10 09:50:58.652010 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-02-10 09:50:58.652020 | orchestrator | Monday 10 February 2025 09:46:18 +0000 (0:00:01.422) 0:04:59.754 ******* 2025-02-10 09:50:58.652033 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-02-10 09:50:58.652041 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-02-10 09:50:58.652049 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-02-10 09:50:58.652057 | orchestrator | 2025-02-10 
09:50:58.652065 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-02-10 09:50:58.652073 | orchestrator | Monday 10 February 2025 09:46:19 +0000 (0:00:00.981) 0:05:00.736 ******* 2025-02-10 09:50:58.652123 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-02-10 09:50:58.652132 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-02-10 09:50:58.652141 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-02-10 09:50:58.652149 | orchestrator | 2025-02-10 09:50:58.652157 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-02-10 09:50:58.652165 | orchestrator | Monday 10 February 2025 09:46:20 +0000 (0:00:00.961) 0:05:01.697 ******* 2025-02-10 09:50:58.652173 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:50:58.652181 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:50:58.652188 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:50:58.652196 | orchestrator | 2025-02-10 09:50:58.652204 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-02-10 09:50:58.652212 | orchestrator | Monday 10 February 2025 09:46:21 +0000 (0:00:01.028) 0:05:02.725 ******* 2025-02-10 09:50:58.652219 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:50:58.652226 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:50:58.652233 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:50:58.652240 | orchestrator | 2025-02-10 09:50:58.652247 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-02-10 09:50:58.652254 | orchestrator | Monday 10 February 2025 09:46:21 +0000 (0:00:00.437) 0:05:03.162 ******* 2025-02-10 09:50:58.652261 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-02-10 09:50:58.652268 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-02-10 09:50:58.652275 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-02-10 09:50:58.652283 | orchestrator | 2025-02-10 09:50:58.652291 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-02-10 09:50:58.652299 | orchestrator | Monday 10 February 2025 09:46:23 +0000 (0:00:01.498) 0:05:04.660 ******* 2025-02-10 09:50:58.652306 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-02-10 09:50:58.652314 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-02-10 09:50:58.652322 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-02-10 09:50:58.652330 | orchestrator | 2025-02-10 09:50:58.652337 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-02-10 09:50:58.652345 | orchestrator | Monday 10 February 2025 09:46:25 +0000 (0:00:01.565) 0:05:06.226 ******* 2025-02-10 09:50:58.652352 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-02-10 09:50:58.652360 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-02-10 09:50:58.652368 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-02-10 09:50:58.652376 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-02-10 09:50:58.652383 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-02-10 09:50:58.652391 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-02-10 09:50:58.652398 | orchestrator | 2025-02-10 09:50:58.652406 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host 
libvirt)] ************ 2025-02-10 09:50:58.652413 | orchestrator | Monday 10 February 2025 09:46:32 +0000 (0:00:07.145) 0:05:13.371 ******* 2025-02-10 09:50:58.652421 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.652429 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:50:58.652437 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.652444 | orchestrator | 2025-02-10 09:50:58.652452 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-02-10 09:50:58.652460 | orchestrator | Monday 10 February 2025 09:46:32 +0000 (0:00:00.336) 0:05:13.708 ******* 2025-02-10 09:50:58.652472 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.652480 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:50:58.652487 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.652495 | orchestrator | 2025-02-10 09:50:58.652503 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-02-10 09:50:58.652511 | orchestrator | Monday 10 February 2025 09:46:33 +0000 (0:00:00.590) 0:05:14.299 ******* 2025-02-10 09:50:58.652523 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:50:58.652531 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:50:58.652539 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:50:58.652547 | orchestrator | 2025-02-10 09:50:58.652554 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-02-10 09:50:58.652562 | orchestrator | Monday 10 February 2025 09:46:34 +0000 (0:00:01.874) 0:05:16.174 ******* 2025-02-10 09:50:58.652570 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-02-10 09:50:58.652578 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-02-10 09:50:58.652586 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-02-10 09:50:58.652594 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-02-10 09:50:58.652602 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-02-10 09:50:58.652632 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-02-10 09:50:58.652640 | orchestrator | 2025-02-10 09:50:58.652647 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-02-10 09:50:58.652655 | orchestrator | Monday 10 February 2025 09:46:39 +0000 (0:00:04.188) 0:05:20.362 ******* 2025-02-10 09:50:58.652662 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-10 09:50:58.652669 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-10 09:50:58.652676 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-10 09:50:58.652683 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-10 09:50:58.652689 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:50:58.652696 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-10 09:50:58.652703 | orchestrator | changed: [testbed-node-4] 
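The "Pushing nova secret xml for libvirt" and "Pushing secrets key for libvirt" tasks above register the Ceph client keys as libvirt secrets on the compute nodes (testbed-node-3/4/5), so the nova_libvirt container can attach RBD-backed disks. As a rough sketch only (the file path and the key value below are placeholders, not taken from this job), the secret pushed for client.nova corresponds to a libvirt secret definition like:

    <secret ephemeral='no' private='no'>
      <uuid>5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd</uuid>   <!-- uuid as reported in the task output -->
      <usage type='ceph'>
        <name>client.nova secret</name>
      </usage>
    </secret>

which is then loaded into libvirt with commands equivalent to:

    virsh secret-define --file /path/to/client-nova-secret.xml          # illustrative path only
    virsh secret-set-value --secret 5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd --base64 BASE64_KEY_FROM_NOVA_KEYRING

The client.cinder secret (uuid 63dd366f-e403-41f2-beff-dad9980a1637) is handled the same way, so RBD volumes attached via Cinder authenticate with the cinder credentials.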
2025-02-10 09:50:58.652710 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-10 09:50:58.652717 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:50:58.652724 | orchestrator | 2025-02-10 09:50:58.652731 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-02-10 09:50:58.652738 | orchestrator | Monday 10 February 2025 09:46:42 +0000 (0:00:03.107) 0:05:23.470 ******* 2025-02-10 09:50:58.652745 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.652752 | orchestrator | 2025-02-10 09:50:58.652759 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-02-10 09:50:58.652766 | orchestrator | Monday 10 February 2025 09:46:42 +0000 (0:00:00.151) 0:05:23.623 ******* 2025-02-10 09:50:58.652773 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.652780 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:50:58.652787 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.652794 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.652801 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.652811 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.652818 | orchestrator | 2025-02-10 09:50:58.652825 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-02-10 09:50:58.652832 | orchestrator | Monday 10 February 2025 09:46:43 +0000 (0:00:00.883) 0:05:24.506 ******* 2025-02-10 09:50:58.652844 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-02-10 09:50:58.652851 | orchestrator | 2025-02-10 09:50:58.652858 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-02-10 09:50:58.652865 | orchestrator | Monday 10 February 2025 09:46:43 +0000 (0:00:00.469) 0:05:24.976 ******* 2025-02-10 09:50:58.652872 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.652879 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:50:58.652886 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.652892 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.652899 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.652906 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.652913 | orchestrator | 2025-02-10 09:50:58.652920 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-02-10 09:50:58.652927 | orchestrator | Monday 10 February 2025 09:46:45 +0000 (0:00:01.210) 0:05:26.187 ******* 2025-02-10 09:50:58.652934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.652942 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.652970 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:50:58.652979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.652991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.652999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.653006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.653014 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653037 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653057 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.653066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.653074 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.653117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.653125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.653139 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.653146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.653161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.653168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.653198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.653210 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653218 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.653225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.653233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.653240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}})  2025-02-10 09:50:58.653247 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.653293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.653301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.653308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.653316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.653345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.653367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.653381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.653412 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.653432 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.653447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653489 | orchestrator | 2025-02-10 09:50:58.653496 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-02-10 09:50:58.653503 | orchestrator | Monday 10 February 2025 09:46:50 +0000 (0:00:05.355) 0:05:31.542 ******* 2025-02-10 09:50:58.653510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.653518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.653525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.653532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.653540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.653547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.653573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.653582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.653589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.653596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.653608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.653615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.653642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.653651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.653658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.653666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.653673 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.653680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.653706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.653715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.653722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.653730 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.653737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.653749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.653771 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.653787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.653802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.653809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653821 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.653852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.653859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.653866 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.653885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.653915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.653922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.653937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.653955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.653986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.653994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.654001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 
'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.654012 | orchestrator | 2025-02-10 09:50:58.654041 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-02-10 09:50:58.654048 | orchestrator | Monday 10 February 2025 09:47:02 +0000 (0:00:11.757) 0:05:43.300 ******* 2025-02-10 09:50:58.654056 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.654063 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:50:58.654070 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.654077 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.654123 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.654130 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.654137 | orchestrator | 2025-02-10 09:50:58.654144 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-02-10 09:50:58.654151 | orchestrator | Monday 10 February 2025 09:47:04 +0000 (0:00:02.830) 0:05:46.131 ******* 2025-02-10 09:50:58.654158 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-02-10 09:50:58.654166 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-02-10 09:50:58.654173 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-02-10 09:50:58.654180 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-02-10 09:50:58.654187 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-02-10 09:50:58.654194 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.654201 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-02-10 09:50:58.654208 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.654216 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-02-10 09:50:58.654223 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.654230 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-02-10 09:50:58.654237 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-02-10 09:50:58.654244 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-02-10 09:50:58.654272 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-02-10 09:50:58.654284 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-02-10 09:50:58.654292 | orchestrator | 2025-02-10 09:50:58.654299 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-02-10 09:50:58.654306 | orchestrator | Monday 10 February 2025 09:47:12 +0000 (0:00:07.258) 0:05:53.389 
******* 2025-02-10 09:50:58.654312 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.654319 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:50:58.654326 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.654333 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.654340 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.654346 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.654352 | orchestrator | 2025-02-10 09:50:58.654358 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-02-10 09:50:58.654364 | orchestrator | Monday 10 February 2025 09:47:13 +0000 (0:00:01.012) 0:05:54.402 ******* 2025-02-10 09:50:58.654370 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-02-10 09:50:58.654377 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-02-10 09:50:58.654387 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-02-10 09:50:58.654399 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-02-10 09:50:58.654405 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-02-10 09:50:58.654411 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-02-10 09:50:58.654417 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-02-10 09:50:58.654424 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-02-10 09:50:58.654430 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-02-10 09:50:58.654436 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.654442 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-02-10 09:50:58.654448 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.654454 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-02-10 09:50:58.654460 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.654467 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-02-10 09:50:58.654473 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-02-10 09:50:58.654479 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-02-10 09:50:58.654485 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-02-10 09:50:58.654491 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-02-10 09:50:58.654497 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-02-10 
09:50:58.654503 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-02-10 09:50:58.654509 | orchestrator | 2025-02-10 09:50:58.654516 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-02-10 09:50:58.654522 | orchestrator | Monday 10 February 2025 09:47:20 +0000 (0:00:07.644) 0:06:02.046 ******* 2025-02-10 09:50:58.654528 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-02-10 09:50:58.654534 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-02-10 09:50:58.654540 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-02-10 09:50:58.654546 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-02-10 09:50:58.654553 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-02-10 09:50:58.654559 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-10 09:50:58.654565 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-02-10 09:50:58.654571 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-02-10 09:50:58.654577 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-02-10 09:50:58.654597 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-02-10 09:50:58.654604 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-10 09:50:58.654614 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-10 09:50:58.654621 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-02-10 09:50:58.654627 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.654633 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-02-10 09:50:58.654639 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.654645 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-02-10 09:50:58.654651 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.654657 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-02-10 09:50:58.654664 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-02-10 09:50:58.654670 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-02-10 09:50:58.654676 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-02-10 09:50:58.654682 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-02-10 09:50:58.654688 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-02-10 09:50:58.654694 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-02-10 09:50:58.654701 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-02-10 09:50:58.654710 | orchestrator 
| changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-02-10 09:50:58.654716 | orchestrator | 2025-02-10 09:50:58.654722 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-02-10 09:50:58.654728 | orchestrator | Monday 10 February 2025 09:47:33 +0000 (0:00:13.117) 0:06:15.164 ******* 2025-02-10 09:50:58.654735 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.654741 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:50:58.654747 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.654753 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.654759 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.654765 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.654771 | orchestrator | 2025-02-10 09:50:58.654778 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-02-10 09:50:58.654784 | orchestrator | Monday 10 February 2025 09:47:34 +0000 (0:00:00.691) 0:06:15.855 ******* 2025-02-10 09:50:58.654790 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.654796 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:50:58.654802 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.654808 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.654815 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.654821 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.654827 | orchestrator | 2025-02-10 09:50:58.654833 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-02-10 09:50:58.654839 | orchestrator | Monday 10 February 2025 09:47:35 +0000 (0:00:01.245) 0:06:17.102 ******* 2025-02-10 09:50:58.654845 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.654851 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.654857 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.654863 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:50:58.654869 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:50:58.654875 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:50:58.654882 | orchestrator | 2025-02-10 09:50:58.654888 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-02-10 09:50:58.654894 | orchestrator | Monday 10 February 2025 09:47:40 +0000 (0:00:04.156) 0:06:21.258 ******* 2025-02-10 09:50:58.654901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.654925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.654933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.654940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.654946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.654952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.654959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.654969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.654976 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.654998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.655006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.655013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655019 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.655036 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.655071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.655105 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655115 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:50:58.655122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.655150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655157 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655170 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.655177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.655203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.655214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 
'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.655237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655260 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.655273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.655282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.655289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.655307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.655320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.655329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.655367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655403 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.655414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655421 | orchestrator | skipping: [testbed-node-2] 
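The "changed" versus "skipping" results in the nova-cell loops above follow the usual kolla-ansible selection pattern: each loop item is a per-service definition, and a host only acts on an item when the service is enabled and the host belongs to the service's inventory group, which is why the control nodes (testbed-node-0..2) skip the compute-only items while the compute nodes (testbed-node-3..5) skip the conductor/proxy items. The sketch below is an illustrative reconstruction of that selection logic only, not kolla-ansible code; the group-to-host mapping and the item_applies helper are assumptions inferred from this log output.

    # Illustrative sketch (assumed logic, not kolla-ansible source): why the same
    # loop item prints "changed" on one host and "skipping" on another above.
    nova_cell_services = {
        # Heavily trimmed versions of the item dicts printed by the tasks above.
        "nova-libvirt":         {"group": "compute",              "enabled": True},
        "nova-ssh":             {"group": "compute",              "enabled": True},
        "nova-compute":         {"group": "compute",              "enabled": True},
        "nova-conductor":       {"group": "nova-conductor",       "enabled": True},
        "nova-novncproxy":      {"group": "nova-novncproxy",      "enabled": True},
        "nova-spicehtml5proxy": {"group": "nova-spicehtml5proxy", "enabled": False},
        "nova-serialproxy":     {"group": "nova-serialproxy",     "enabled": False},
    }

    # Assumed inventory layout matching the skip pattern in this log: control
    # nodes 0-2 carry the conductor/proxy groups, compute nodes 3-5 the compute group.
    host_groups = {
        "testbed-node-0": {"nova-conductor", "nova-novncproxy"},
        "testbed-node-1": {"nova-conductor", "nova-novncproxy"},
        "testbed-node-2": {"nova-conductor", "nova-novncproxy"},
        "testbed-node-3": {"compute"},
        "testbed-node-4": {"compute"},
        "testbed-node-5": {"compute"},
    }

    def item_applies(host: str, service: dict) -> bool:
        # A host acts on an item only if the service is enabled and the host
        # is a member of the service's group (hypothetical helper).
        return service["enabled"] and service["group"] in host_groups[host]

    for host in host_groups:
        for name, svc in nova_cell_services.items():
            verdict = "changed" if item_applies(host, svc) else "skipping"
            print(f"{verdict:>8}: [{host}] => (item={name})")

Running this reproduces the per-host skip/changed pattern visible in the task output: only enabled services whose group contains the host are acted on, and everything else is reported as skipped.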
2025-02-10 09:50:58.655427 | orchestrator | 2025-02-10 09:50:58.655434 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-02-10 09:50:58.655445 | orchestrator | Monday 10 February 2025 09:47:42 +0000 (0:00:02.257) 0:06:23.516 ******* 2025-02-10 09:50:58.655451 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-02-10 09:50:58.655457 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-02-10 09:50:58.655463 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.655470 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-02-10 09:50:58.655476 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-02-10 09:50:58.655483 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:50:58.655489 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-02-10 09:50:58.655495 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-02-10 09:50:58.655502 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.655511 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-02-10 09:50:58.655517 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-02-10 09:50:58.655523 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.655530 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-02-10 09:50:58.655536 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-02-10 09:50:58.655542 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.655548 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-02-10 09:50:58.655554 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-02-10 09:50:58.655560 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.655567 | orchestrator | 2025-02-10 09:50:58.655573 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-02-10 09:50:58.655579 | orchestrator | Monday 10 February 2025 09:47:43 +0000 (0:00:00.740) 0:06:24.256 ******* 2025-02-10 09:50:58.655585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.655592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.655609 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:50:58.655621 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:50:58.655627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.655634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-10 09:50:58.655640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.655656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-10 09:50:58.655667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-10 09:50:58.655674 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:50:58.655680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:50:58.655687 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655703 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.655721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.655727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655740 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:50:58.655747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.655772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655779 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-10 09:50:58.655785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:50:58.655804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-10 09:50:58.655819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655829 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-10 09:50:58.655836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.655842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.655849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-10 09:50:58.655855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655867 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.655874 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.655896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655907 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.655914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 
09:50:58.655921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.655930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655940 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.655947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.655953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 
09:50:58.655965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-10 09:50:58.655972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.655982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.655991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-10 09:50:58.655997 | orchestrator | 2025-02-10 09:50:58.656004 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-02-10 09:50:58.656010 | orchestrator | Monday 10 February 2025 09:47:47 +0000 (0:00:04.636) 0:06:28.892 ******* 2025-02-10 09:50:58.656016 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.656023 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:50:58.656029 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.656035 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.656041 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.656047 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.656053 | orchestrator | 2025-02-10 09:50:58.656060 | orchestrator | TASK 
[nova-cell : Flush handlers] ********************************************** 2025-02-10 09:50:58.656066 | orchestrator | Monday 10 February 2025 09:47:48 +0000 (0:00:00.937) 0:06:29.830 ******* 2025-02-10 09:50:58.656072 | orchestrator | 2025-02-10 09:50:58.656078 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-02-10 09:50:58.656100 | orchestrator | Monday 10 February 2025 09:47:49 +0000 (0:00:00.389) 0:06:30.219 ******* 2025-02-10 09:50:58.656106 | orchestrator | 2025-02-10 09:50:58.656112 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-02-10 09:50:58.656119 | orchestrator | Monday 10 February 2025 09:47:49 +0000 (0:00:00.171) 0:06:30.390 ******* 2025-02-10 09:50:58.656125 | orchestrator | 2025-02-10 09:50:58.656131 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-02-10 09:50:58.656137 | orchestrator | Monday 10 February 2025 09:47:49 +0000 (0:00:00.369) 0:06:30.760 ******* 2025-02-10 09:50:58.656143 | orchestrator | 2025-02-10 09:50:58.656150 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-02-10 09:50:58.656156 | orchestrator | Monday 10 February 2025 09:47:49 +0000 (0:00:00.168) 0:06:30.928 ******* 2025-02-10 09:50:58.656162 | orchestrator | 2025-02-10 09:50:58.656168 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-02-10 09:50:58.656174 | orchestrator | Monday 10 February 2025 09:47:50 +0000 (0:00:00.377) 0:06:31.305 ******* 2025-02-10 09:50:58.656181 | orchestrator | 2025-02-10 09:50:58.656187 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-02-10 09:50:58.656193 | orchestrator | Monday 10 February 2025 09:47:50 +0000 (0:00:00.182) 0:06:31.487 ******* 2025-02-10 09:50:58.656199 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.656205 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:50:58.656211 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:50:58.656218 | orchestrator | 2025-02-10 09:50:58.656224 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-02-10 09:50:58.656234 | orchestrator | Monday 10 February 2025 09:48:02 +0000 (0:00:12.432) 0:06:43.920 ******* 2025-02-10 09:50:58.656240 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.656246 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:50:58.656252 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:50:58.656258 | orchestrator | 2025-02-10 09:50:58.656264 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-02-10 09:50:58.656273 | orchestrator | Monday 10 February 2025 09:48:14 +0000 (0:00:11.953) 0:06:55.874 ******* 2025-02-10 09:50:58.656280 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:50:58.656286 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:50:58.656292 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:50:58.656298 | orchestrator | 2025-02-10 09:50:58.656304 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-02-10 09:50:58.656310 | orchestrator | Monday 10 February 2025 09:48:34 +0000 (0:00:19.826) 0:07:15.700 ******* 2025-02-10 09:50:58.656316 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:50:58.656322 | orchestrator | changed: [testbed-node-4] 2025-02-10 
09:50:58.656329 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:50:58.656335 | orchestrator | 2025-02-10 09:50:58.656341 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-02-10 09:50:58.656347 | orchestrator | Monday 10 February 2025 09:48:58 +0000 (0:00:24.438) 0:07:40.138 ******* 2025-02-10 09:50:58.656353 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:50:58.656359 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:50:58.656366 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:50:58.656372 | orchestrator | 2025-02-10 09:50:58.656378 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-02-10 09:50:58.656384 | orchestrator | Monday 10 February 2025 09:49:00 +0000 (0:00:01.282) 0:07:41.420 ******* 2025-02-10 09:50:58.656390 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:50:58.656396 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:50:58.656403 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:50:58.656409 | orchestrator | 2025-02-10 09:50:58.656415 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-02-10 09:50:58.656421 | orchestrator | Monday 10 February 2025 09:49:01 +0000 (0:00:01.016) 0:07:42.437 ******* 2025-02-10 09:50:58.656427 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:50:58.656434 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:50:58.656440 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:50:58.656446 | orchestrator | 2025-02-10 09:50:58.656452 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute-ironic container] ************ 2025-02-10 09:50:58.656458 | orchestrator | Monday 10 February 2025 09:49:23 +0000 (0:00:21.881) 0:08:04.319 ******* 2025-02-10 09:50:58.656464 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.656471 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:50:58.656479 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:50:58.656486 | orchestrator | 2025-02-10 09:50:58.656492 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-02-10 09:50:58.656498 | orchestrator | Monday 10 February 2025 09:49:33 +0000 (0:00:10.302) 0:08:14.622 ******* 2025-02-10 09:50:58.656504 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.656510 | orchestrator | 2025-02-10 09:50:58.656516 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-02-10 09:50:58.656523 | orchestrator | Monday 10 February 2025 09:49:33 +0000 (0:00:00.132) 0:08:14.754 ******* 2025-02-10 09:50:58.656529 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.656535 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.656541 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.656547 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.656553 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.656561 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2025-02-10 09:50:58.656571 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:50:58.656577 | orchestrator | 2025-02-10 09:50:58.656583 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-02-10 09:50:58.656589 | orchestrator | Monday 10 February 2025 09:49:58 +0000 (0:00:25.156) 0:08:39.911 ******* 2025-02-10 09:50:58.656595 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.656602 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.656608 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.656614 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.656620 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.656626 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:50:58.656632 | orchestrator | 2025-02-10 09:50:58.656638 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-02-10 09:50:58.656644 | orchestrator | Monday 10 February 2025 09:50:16 +0000 (0:00:17.637) 0:08:57.548 ******* 2025-02-10 09:50:58.656650 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.656659 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.656665 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.656671 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.656677 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.656684 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-02-10 09:50:58.656690 | orchestrator | 2025-02-10 09:50:58.656696 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-02-10 09:50:58.656702 | orchestrator | Monday 10 February 2025 09:50:20 +0000 (0:00:04.470) 0:09:02.019 ******* 2025-02-10 09:50:58.656708 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:50:58.656714 | orchestrator | 2025-02-10 09:50:58.656721 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-02-10 09:50:58.656727 | orchestrator | Monday 10 February 2025 09:50:33 +0000 (0:00:12.527) 0:09:14.546 ******* 2025-02-10 09:50:58.656735 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:50:58.656742 | orchestrator | 2025-02-10 09:50:58.656748 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-02-10 09:50:58.656754 | orchestrator | Monday 10 February 2025 09:50:34 +0000 (0:00:01.420) 0:09:15.967 ******* 2025-02-10 09:50:58.656760 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:50:58.656766 | orchestrator | 2025-02-10 09:50:58.656772 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-02-10 09:50:58.656778 | orchestrator | Monday 10 February 2025 09:50:36 +0000 (0:00:01.447) 0:09:17.415 ******* 2025-02-10 09:50:58.656784 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:50:58.656790 | orchestrator | 2025-02-10 09:50:58.656797 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-02-10 09:50:58.656803 | orchestrator | Monday 10 February 2025 09:50:48 +0000 (0:00:12.467) 0:09:29.882 ******* 2025-02-10 09:50:58.656809 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:50:58.656815 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:50:58.656821 | orchestrator | ok: 
[testbed-node-5] 2025-02-10 09:50:58.656827 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:50:58.656833 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:50:58.656839 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:50:58.656845 | orchestrator | 2025-02-10 09:50:58.656852 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-02-10 09:50:58.656858 | orchestrator | 2025-02-10 09:50:58.656864 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-02-10 09:50:58.656873 | orchestrator | Monday 10 February 2025 09:50:51 +0000 (0:00:03.096) 0:09:32.979 ******* 2025-02-10 09:50:58.656879 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:50:58.656885 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:50:58.656891 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:50:58.656897 | orchestrator | 2025-02-10 09:50:58.656908 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-02-10 09:50:58.656914 | orchestrator | 2025-02-10 09:50:58.656920 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-02-10 09:50:58.656926 | orchestrator | Monday 10 February 2025 09:50:52 +0000 (0:00:01.121) 0:09:34.100 ******* 2025-02-10 09:50:58.656932 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.656938 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.656944 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.656950 | orchestrator | 2025-02-10 09:50:58.656956 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-02-10 09:50:58.656962 | orchestrator | 2025-02-10 09:50:58.656968 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-02-10 09:50:58.656974 | orchestrator | Monday 10 February 2025 09:50:53 +0000 (0:00:00.724) 0:09:34.825 ******* 2025-02-10 09:50:58.656980 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-02-10 09:50:58.656987 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-02-10 09:50:58.656993 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-02-10 09:50:58.657001 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-02-10 09:50:58.657008 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-02-10 09:50:58.657014 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-02-10 09:50:58.657020 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:50:58.657026 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-02-10 09:50:58.657032 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-02-10 09:50:58.657038 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-02-10 09:50:58.657044 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-02-10 09:50:58.657050 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-02-10 09:50:58.657056 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-02-10 09:50:58.657063 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:50:58.657069 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-02-10 09:50:58.657075 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-02-10 09:50:58.657093 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-02-10 09:50:58.657100 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-02-10 09:50:58.657106 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-02-10 09:50:58.657112 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-02-10 09:50:58.657118 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:50:58.657124 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-02-10 09:50:58.657131 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-02-10 09:50:58.657137 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-02-10 09:50:58.657143 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-02-10 09:50:58.657149 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-02-10 09:50:58.657155 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-02-10 09:50:58.657161 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.657167 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-02-10 09:50:58.657173 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-02-10 09:50:58.657180 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-02-10 09:50:58.657186 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-02-10 09:50:58.657192 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-02-10 09:50:58.657198 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-02-10 09:50:58.657204 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.657214 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-02-10 09:50:58.657220 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-02-10 09:50:58.657226 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-02-10 09:50:58.657232 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-02-10 09:50:58.657238 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-02-10 09:50:58.657248 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-02-10 09:50:58.657254 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.657260 | orchestrator | 2025-02-10 09:50:58.657266 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-02-10 09:50:58.657273 | orchestrator | 2025-02-10 09:50:58.657279 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-02-10 09:50:58.657285 | orchestrator | Monday 10 February 2025 09:50:55 +0000 (0:00:01.623) 0:09:36.449 ******* 2025-02-10 09:50:58.657291 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-02-10 09:50:58.657297 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-02-10 09:50:58.657303 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.657309 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-02-10 09:50:58.657316 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-02-10 09:50:58.657322 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.657328 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-02-10 09:50:58.657334 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)  2025-02-10 09:50:58.657340 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.657346 | orchestrator | 2025-02-10 09:50:58.657353 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-02-10 09:50:58.657359 | orchestrator | 2025-02-10 09:50:58.657365 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-02-10 09:50:58.657371 | orchestrator | Monday 10 February 2025 09:50:55 +0000 (0:00:00.715) 0:09:37.165 ******* 2025-02-10 09:50:58.657377 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.657383 | orchestrator | 2025-02-10 09:50:58.657390 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-02-10 09:50:58.657396 | orchestrator | 2025-02-10 09:50:58.657402 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-02-10 09:50:58.657408 | orchestrator | Monday 10 February 2025 09:50:56 +0000 (0:00:00.631) 0:09:37.796 ******* 2025-02-10 09:50:58.657414 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:50:58.657420 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:50:58.657427 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:50:58.657433 | orchestrator | 2025-02-10 09:50:58.657439 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:50:58.657445 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:50:58.657454 | orchestrator | testbed-node-0 : ok=55  changed=36  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-02-10 09:51:01.674839 | orchestrator | testbed-node-1 : ok=28  changed=20  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-02-10 09:51:01.674954 | orchestrator | testbed-node-2 : ok=28  changed=20  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-02-10 09:51:01.674965 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-02-10 09:51:01.675009 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-02-10 09:51:01.675037 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-02-10 09:51:01.675044 | orchestrator | 2025-02-10 09:51:01.675052 | orchestrator | 2025-02-10 09:51:01.675060 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:51:01.675068 | orchestrator | Monday 10 February 2025 09:50:57 +0000 (0:00:00.490) 0:09:38.286 ******* 2025-02-10 09:51:01.675112 | orchestrator | =============================================================================== 2025-02-10 09:51:01.675121 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 27.51s 2025-02-10 09:51:01.675128 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 25.16s 2025-02-10 09:51:01.675135 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 24.44s 2025-02-10 09:51:01.675142 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 24.37s 2025-02-10 09:51:01.675149 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.88s 2025-02-10 09:51:01.675156 | orchestrator | nova-cell : 
Restart nova-ssh container --------------------------------- 19.83s 2025-02-10 09:51:01.675163 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 18.08s 2025-02-10 09:51:01.675171 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 17.64s 2025-02-10 09:51:01.675178 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.75s 2025-02-10 09:51:01.675185 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 13.12s 2025-02-10 09:51:01.675192 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.53s 2025-02-10 09:51:01.675203 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 12.48s 2025-02-10 09:51:01.675210 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.47s 2025-02-10 09:51:01.675217 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.43s 2025-02-10 09:51:01.675224 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.95s 2025-02-10 09:51:01.675231 | orchestrator | nova-cell : Copying over nova.conf ------------------------------------- 11.76s 2025-02-10 09:51:01.675238 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.08s 2025-02-10 09:51:01.675245 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.74s 2025-02-10 09:51:01.675252 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 10.42s 2025-02-10 09:51:01.675259 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 10.31s 2025-02-10 09:51:01.675266 | orchestrator | 2025-02-10 09:50:58 | INFO  | Wait 1 second(s) until the next check 2025-02-10 09:51:01.675289 | orchestrator | 2025-02-10 09:51:01 | INFO  | Task 9b9f4e0d-2cb6-449b-aea1-17233a293fd8 is in state SUCCESS 2025-02-10 09:51:01.676109 | orchestrator | 2025-02-10 09:51:01.676132 | orchestrator | 2025-02-10 09:51:01.676140 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:51:01.676147 | orchestrator | 2025-02-10 09:51:01.676154 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:51:01.676161 | orchestrator | Monday 10 February 2025 09:45:05 +0000 (0:00:00.407) 0:00:00.407 ******* 2025-02-10 09:51:01.676172 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:51:01.676185 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:51:01.676196 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:51:01.676207 | orchestrator | 2025-02-10 09:51:01.676219 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:51:01.676227 | orchestrator | Monday 10 February 2025 09:45:06 +0000 (0:00:00.448) 0:00:00.855 ******* 2025-02-10 09:51:01.676234 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-02-10 09:51:01.676242 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-02-10 09:51:01.676259 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-02-10 09:51:01.676266 | orchestrator | 2025-02-10 09:51:01.676273 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-02-10 09:51:01.676280 | orchestrator | 2025-02-10 
09:51:01.676287 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-10 09:51:01.676294 | orchestrator | Monday 10 February 2025 09:45:06 +0000 (0:00:00.535) 0:00:01.390 ******* 2025-02-10 09:51:01.676302 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:51:01.676310 | orchestrator | 2025-02-10 09:51:01.676317 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-02-10 09:51:01.676324 | orchestrator | Monday 10 February 2025 09:45:07 +0000 (0:00:00.751) 0:00:02.142 ******* 2025-02-10 09:51:01.676332 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-02-10 09:51:01.676339 | orchestrator | 2025-02-10 09:51:01.676346 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-02-10 09:51:01.676353 | orchestrator | Monday 10 February 2025 09:45:11 +0000 (0:00:04.023) 0:00:06.165 ******* 2025-02-10 09:51:01.676360 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-02-10 09:51:01.676367 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-02-10 09:51:01.676374 | orchestrator | 2025-02-10 09:51:01.676381 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-02-10 09:51:01.676388 | orchestrator | Monday 10 February 2025 09:45:19 +0000 (0:00:07.704) 0:00:13.870 ******* 2025-02-10 09:51:01.676396 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-10 09:51:01.676403 | orchestrator | 2025-02-10 09:51:01.676410 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-02-10 09:51:01.676417 | orchestrator | Monday 10 February 2025 09:45:22 +0000 (0:00:03.357) 0:00:17.227 ******* 2025-02-10 09:51:01.676424 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-10 09:51:01.676431 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-02-10 09:51:01.676438 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-02-10 09:51:01.676445 | orchestrator | 2025-02-10 09:51:01.676452 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-02-10 09:51:01.676459 | orchestrator | Monday 10 February 2025 09:45:31 +0000 (0:00:09.127) 0:00:26.355 ******* 2025-02-10 09:51:01.676466 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-10 09:51:01.676520 | orchestrator | 2025-02-10 09:51:01.676566 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-02-10 09:51:01.676574 | orchestrator | Monday 10 February 2025 09:45:36 +0000 (0:00:04.362) 0:00:30.717 ******* 2025-02-10 09:51:01.676581 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-02-10 09:51:01.676589 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-02-10 09:51:01.676596 | orchestrator | 2025-02-10 09:51:01.676603 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-02-10 09:51:01.676610 | orchestrator | Monday 10 February 2025 09:45:43 +0000 (0:00:07.443) 0:00:38.160 ******* 2025-02-10 09:51:01.676617 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-02-10 
09:51:01.676624 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-02-10 09:51:01.676630 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-02-10 09:51:01.676638 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-02-10 09:51:01.676644 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-02-10 09:51:01.676651 | orchestrator | 2025-02-10 09:51:01.676658 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-10 09:51:01.676671 | orchestrator | Monday 10 February 2025 09:46:01 +0000 (0:00:18.338) 0:00:56.498 ******* 2025-02-10 09:51:01.676684 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:51:01.676691 | orchestrator | 2025-02-10 09:51:01.676700 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-02-10 09:51:01.676708 | orchestrator | Monday 10 February 2025 09:46:02 +0000 (0:00:00.687) 0:00:57.186 ******* 2025-02-10 09:51:01.676716 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.676724 | orchestrator | 2025-02-10 09:51:01.676732 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-02-10 09:51:01.676740 | orchestrator | Monday 10 February 2025 09:46:36 +0000 (0:00:33.670) 0:01:30.857 ******* 2025-02-10 09:51:01.676748 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.676756 | orchestrator | 2025-02-10 09:51:01.676764 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-02-10 09:51:01.676779 | orchestrator | Monday 10 February 2025 09:46:41 +0000 (0:00:05.520) 0:01:36.377 ******* 2025-02-10 09:51:01.676788 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:51:01.676796 | orchestrator | 2025-02-10 09:51:01.676805 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-02-10 09:51:01.676817 | orchestrator | Monday 10 February 2025 09:46:44 +0000 (0:00:03.210) 0:01:39.587 ******* 2025-02-10 09:51:01.676828 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-02-10 09:51:01.676840 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-02-10 09:51:01.676848 | orchestrator | 2025-02-10 09:51:01.676856 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-02-10 09:51:01.676864 | orchestrator | Monday 10 February 2025 09:46:55 +0000 (0:00:11.061) 0:01:50.648 ******* 2025-02-10 09:51:01.676872 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-02-10 09:51:01.676880 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-02-10 09:51:01.676889 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-02-10 09:51:01.676898 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-02-10 09:51:01.676906 | orchestrator | 2025-02-10 09:51:01.676914 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 
2025-02-10 09:51:01.677263 | orchestrator | Monday 10 February 2025 09:47:14 +0000 (0:00:18.105) 0:02:08.754 ******* 2025-02-10 09:51:01.677288 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.677296 | orchestrator | 2025-02-10 09:51:01.677303 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-02-10 09:51:01.677311 | orchestrator | Monday 10 February 2025 09:47:20 +0000 (0:00:06.436) 0:02:15.190 ******* 2025-02-10 09:51:01.677318 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.677325 | orchestrator | 2025-02-10 09:51:01.677332 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-02-10 09:51:01.677339 | orchestrator | Monday 10 February 2025 09:47:27 +0000 (0:00:06.706) 0:02:21.897 ******* 2025-02-10 09:51:01.677346 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:51:01.677354 | orchestrator | 2025-02-10 09:51:01.677361 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-02-10 09:51:01.677368 | orchestrator | Monday 10 February 2025 09:47:27 +0000 (0:00:00.333) 0:02:22.231 ******* 2025-02-10 09:51:01.677375 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.677382 | orchestrator | 2025-02-10 09:51:01.677389 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-10 09:51:01.677396 | orchestrator | Monday 10 February 2025 09:47:33 +0000 (0:00:05.937) 0:02:28.168 ******* 2025-02-10 09:51:01.677403 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:51:01.677420 | orchestrator | 2025-02-10 09:51:01.677427 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-02-10 09:51:01.677434 | orchestrator | Monday 10 February 2025 09:47:34 +0000 (0:00:01.071) 0:02:29.240 ******* 2025-02-10 09:51:01.677441 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:51:01.677448 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:51:01.677455 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.677462 | orchestrator | 2025-02-10 09:51:01.677469 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-02-10 09:51:01.677476 | orchestrator | Monday 10 February 2025 09:47:41 +0000 (0:00:06.803) 0:02:36.043 ******* 2025-02-10 09:51:01.677483 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.677490 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:51:01.677497 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:51:01.677504 | orchestrator | 2025-02-10 09:51:01.677511 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-02-10 09:51:01.677518 | orchestrator | Monday 10 February 2025 09:47:45 +0000 (0:00:04.219) 0:02:40.262 ******* 2025-02-10 09:51:01.677525 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.677532 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:51:01.677538 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:51:01.677545 | orchestrator | 2025-02-10 09:51:01.677552 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-02-10 09:51:01.677559 | orchestrator | Monday 10 February 2025 09:47:46 +0000 (0:00:01.049) 0:02:41.311 ******* 2025-02-10 09:51:01.677566 | orchestrator | ok: [testbed-node-2] 2025-02-10 
09:51:01.677574 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:51:01.677587 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:51:01.677655 | orchestrator | 2025-02-10 09:51:01.677787 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-02-10 09:51:01.677795 | orchestrator | Monday 10 February 2025 09:47:49 +0000 (0:00:02.578) 0:02:43.889 ******* 2025-02-10 09:51:01.677801 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.677807 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:51:01.677813 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:51:01.677819 | orchestrator | 2025-02-10 09:51:01.677826 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-02-10 09:51:01.677836 | orchestrator | Monday 10 February 2025 09:47:50 +0000 (0:00:01.698) 0:02:45.588 ******* 2025-02-10 09:51:01.677843 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.677849 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:51:01.677855 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:51:01.677861 | orchestrator | 2025-02-10 09:51:01.677867 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-02-10 09:51:01.677874 | orchestrator | Monday 10 February 2025 09:47:52 +0000 (0:00:01.767) 0:02:47.355 ******* 2025-02-10 09:51:01.677880 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:51:01.677886 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.677892 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:51:01.677898 | orchestrator | 2025-02-10 09:51:01.677922 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-02-10 09:51:01.677930 | orchestrator | Monday 10 February 2025 09:47:54 +0000 (0:00:02.270) 0:02:49.625 ******* 2025-02-10 09:51:01.677936 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.677942 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:51:01.677948 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:51:01.677954 | orchestrator | 2025-02-10 09:51:01.677961 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-02-10 09:51:01.677967 | orchestrator | Monday 10 February 2025 09:47:56 +0000 (0:00:01.671) 0:02:51.297 ******* 2025-02-10 09:51:01.677973 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:51:01.677979 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:51:01.677985 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:51:01.678001 | orchestrator | 2025-02-10 09:51:01.678007 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-02-10 09:51:01.678048 | orchestrator | Monday 10 February 2025 09:47:57 +0000 (0:00:00.637) 0:02:51.935 ******* 2025-02-10 09:51:01.678056 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:51:01.678062 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:51:01.678069 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:51:01.678075 | orchestrator | 2025-02-10 09:51:01.678107 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-10 09:51:01.678114 | orchestrator | Monday 10 February 2025 09:48:01 +0000 (0:00:04.094) 0:02:56.029 ******* 2025-02-10 09:51:01.678121 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:51:01.678127 | orchestrator | 
2025-02-10 09:51:01.678133 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-02-10 09:51:01.678140 | orchestrator | Monday 10 February 2025 09:48:02 +0000 (0:00:00.919) 0:02:56.949 ******* 2025-02-10 09:51:01.678146 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:51:01.678152 | orchestrator | 2025-02-10 09:51:01.678158 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-02-10 09:51:01.678165 | orchestrator | Monday 10 February 2025 09:48:06 +0000 (0:00:04.451) 0:03:01.400 ******* 2025-02-10 09:51:01.678171 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:51:01.678177 | orchestrator | 2025-02-10 09:51:01.678183 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-02-10 09:51:01.678189 | orchestrator | Monday 10 February 2025 09:48:10 +0000 (0:00:03.717) 0:03:05.118 ******* 2025-02-10 09:51:01.678196 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-02-10 09:51:01.678202 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-02-10 09:51:01.678208 | orchestrator | 2025-02-10 09:51:01.678214 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-02-10 09:51:01.678221 | orchestrator | Monday 10 February 2025 09:48:19 +0000 (0:00:08.808) 0:03:13.926 ******* 2025-02-10 09:51:01.678227 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:51:01.678233 | orchestrator | 2025-02-10 09:51:01.678239 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-02-10 09:51:01.678245 | orchestrator | Monday 10 February 2025 09:48:23 +0000 (0:00:04.348) 0:03:18.275 ******* 2025-02-10 09:51:01.678252 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:51:01.678258 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:51:01.678264 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:51:01.678270 | orchestrator | 2025-02-10 09:51:01.678277 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-02-10 09:51:01.678283 | orchestrator | Monday 10 February 2025 09:48:24 +0000 (0:00:00.401) 0:03:18.676 ******* 2025-02-10 09:51:01.678291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:51:01.678348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:51:01.678363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:51:01.678371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:51:01.678380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:51:01.678387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:51:01.678395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.678401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.678439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.678448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.678455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.678463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.678470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:51:01.678477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:51:01.678494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:51:01.678502 | orchestrator | 2025-02-10 09:51:01.678509 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-02-10 09:51:01.678528 | orchestrator | Monday 10 February 2025 09:48:27 +0000 (0:00:03.167) 0:03:21.844 ******* 2025-02-10 09:51:01.678536 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:51:01.678543 | orchestrator | 2025-02-10 09:51:01.678550 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-02-10 09:51:01.678557 | orchestrator | Monday 10 February 2025 09:48:27 +0000 (0:00:00.122) 0:03:21.966 ******* 2025-02-10 09:51:01.678564 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:51:01.678571 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:51:01.678578 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:51:01.678585 | orchestrator | 2025-02-10 09:51:01.678592 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-02-10 09:51:01.678599 | orchestrator | Monday 10 February 2025 09:48:27 +0000 (0:00:00.364) 0:03:22.331 ******* 2025-02-10 09:51:01.678606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:51:01.678616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:51:01.678623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:51:01.678631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:51:01.678642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:51:01.678649 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:51:01.678671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:51:01.678685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:51:01.678693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:51:01.678700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:51:01.678707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:51:01.678718 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:51:01.678726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:51:01.678745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:51:01.678758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:51:01.678766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:51:01.678773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:51:01.678780 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:51:01.678791 | orchestrator | 2025-02-10 09:51:01.678798 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-10 09:51:01.678806 | orchestrator | Monday 10 February 2025 09:48:28 +0000 (0:00:00.938) 0:03:23.269 ******* 2025-02-10 09:51:01.678813 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:51:01.678819 | orchestrator | 2025-02-10 09:51:01.678826 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] 
******** 2025-02-10 09:51:01.678835 | orchestrator | Monday 10 February 2025 09:48:29 +0000 (0:00:00.588) 0:03:23.857 ******* 2025-02-10 09:51:01.678842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:51:01.678869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:51:01.678877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:51:01.678884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2025-02-10 09:51:01.678894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:51:01.678900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:51:01.678907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.678916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.678941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.678947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.678954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.678964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.678971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:51:01.678980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:51:01.678997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679003 | orchestrator | 2025-02-10 09:51:01.679010 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-02-10 09:51:01.679017 | orchestrator | Monday 10 
February 2025 09:48:33 +0000 (0:00:04.591) 0:03:28.449 ******* 2025-02-10 09:51:01.679023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:51:01.679030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:51:01.679043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:51:01.679049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:51:01.679056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:51:01.679062 | orchestrator | 
skipping: [testbed-node-0] 2025-02-10 09:51:01.679135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:51:01.679145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:51:01.679152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:51:01.679164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:51:01.679170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:51:01.679177 | orchestrator | skipping: [testbed-node-1] 2025-02-10 
09:51:01.679183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:51:01.679201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:51:01.679208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:51:01.679215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:51:01.679226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:51:01.679233 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:51:01.679239 | orchestrator | 
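The container definitions echoed in the loop output above also show how each Octavia service is health-checked: the octavia_api containers use healthcheck_curl against the node's internal API address on port 9876, while octavia-health-manager and octavia-housekeeping use healthcheck_port against 3306 (the MariaDB port) and octavia-worker against 5672 (the RabbitMQ port), so those probes effectively verify that the process still holds a connection to its backing service rather than probing an HTTP endpoint. The two backend-TLS copy tasks are skipped on all three nodes, which is consistent with 'tls_backend': 'no' in the octavia_api haproxy entries. As a rough, illustrative sketch of what such a port-based probe amounts to (assumptions: psutil is available and matching is done on process name/cmdline; the actual healthcheck_port helper shipped in the Kolla images may be implemented differently):

#!/usr/bin/env python3
# Illustrative sketch only, not the Kolla healthcheck_port script.
# Exits 0 if a process matching the given name holds an established
# TCP connection to the given remote port, 1 otherwise.
import sys
import psutil  # assumed dependency for this sketch

def process_has_connection(process_name: str, port: int) -> bool:
    """True if any process whose name or cmdline contains process_name
    has a TCP connection whose remote port equals port."""
    for proc in psutil.process_iter(['name', 'cmdline']):
        try:
            name = proc.info['name'] or ''
            cmdline = ' '.join(proc.info['cmdline'] or [])
            if process_name not in name and process_name not in cmdline:
                continue
            for conn in proc.connections(kind='tcp'):
                if conn.raddr and conn.raddr.port == port:
                    return True
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return False

if __name__ == '__main__':
    # invoked analogously to the log's "healthcheck_port octavia-worker 5672"
    target, port = sys.argv[1], int(sys.argv[2])
    sys.exit(0 if process_has_connection(target, port) else 1)

Exiting 0 on success and non-zero on failure is what the 'CMD-SHELL' test entries in the healthcheck dicts above expect from Docker's healthcheck mechanism.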
2025-02-10 09:51:01.679245 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-02-10 09:51:01.679252 | orchestrator | Monday 10 February 2025 09:48:34 +0000 (0:00:00.868) 0:03:29.317 ******* 2025-02-10 09:51:01.679258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:51:01.679265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:51:01.679281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:51:01.679288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:51:01.679299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:51:01.679305 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:51:01.679311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:51:01.679318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:51:01.679327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:51:01.679341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:51:01.679348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:51:01.679359 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:51:01.679365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-10 09:51:01.679372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-10 09:51:01.679379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-10 09:51:01.679391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-10 09:51:01.679398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-10 09:51:01.679405 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:51:01.679411 | orchestrator | 2025-02-10 09:51:01.679420 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-02-10 09:51:01.679427 | orchestrator | Monday 10 February 2025 09:48:36 +0000 (0:00:01.613) 0:03:30.931 ******* 2025-02-10 09:51:01.679433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:51:01.679443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:51:01.679450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:51:01.679461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': 
True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:51:01.679468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:51:01.679477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent'2025-02-10 09:51:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:01.679488 | orchestrator | , 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:51:01.679497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679516 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679569 | orchestrator | 2025-02-10 09:51:01.679575 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-02-10 09:51:01.679582 | orchestrator | Monday 10 February 2025 09:48:42 +0000 (0:00:06.162) 0:03:37.093 ******* 2025-02-10 09:51:01.679588 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-02-10 09:51:01.679594 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-02-10 09:51:01.679601 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-02-10 09:51:01.679607 | orchestrator | 2025-02-10 09:51:01.679613 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-02-10 09:51:01.679620 | orchestrator | Monday 10 February 2025 09:48:45 +0000 (0:00:02.588) 0:03:39.681 ******* 2025-02-10 09:51:01.679633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:51:01.679643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:51:01.679653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:51:01.679659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:51:01.679666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:51:01.679672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:51:01.679684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679703 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:51:01.679763 | orchestrator | 2025-02-10 09:51:01.679769 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-02-10 09:51:01.679776 | orchestrator | Monday 10 February 2025 09:49:07 +0000 (0:00:22.640) 0:04:02.321 ******* 2025-02-10 09:51:01.679782 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.679788 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:51:01.679794 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:51:01.679800 | orchestrator | 2025-02-10 09:51:01.679807 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-02-10 09:51:01.679813 | orchestrator | Monday 10 February 2025 09:49:09 +0000 (0:00:02.169) 0:04:04.491 ******* 2025-02-10 09:51:01.679819 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-02-10 09:51:01.679825 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-02-10 09:51:01.679832 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-02-10 09:51:01.679838 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-02-10 09:51:01.679844 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-02-10 09:51:01.679850 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-02-10 09:51:01.679856 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-02-10 09:51:01.679862 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-02-10 09:51:01.679868 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-02-10 09:51:01.679874 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-02-10 09:51:01.679883 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-02-10 09:51:01.679889 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-02-10 09:51:01.679896 | orchestrator | 2025-02-10 09:51:01.679902 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-02-10 09:51:01.679908 | orchestrator | Monday 10 February 2025 09:49:17 +0000 (0:00:07.988) 0:04:12.479 ******* 2025-02-10 09:51:01.679914 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-02-10 09:51:01.679920 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-02-10 09:51:01.679926 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-02-10 09:51:01.679932 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-02-10 
09:51:01.679938 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-02-10 09:51:01.679945 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-02-10 09:51:01.679951 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-02-10 09:51:01.679960 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-02-10 09:51:01.679966 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-02-10 09:51:01.679972 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-02-10 09:51:01.679978 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-02-10 09:51:01.679985 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-02-10 09:51:01.679995 | orchestrator | 2025-02-10 09:51:01.680001 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-02-10 09:51:01.680007 | orchestrator | Monday 10 February 2025 09:49:25 +0000 (0:00:07.741) 0:04:20.220 ******* 2025-02-10 09:51:01.680013 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-02-10 09:51:01.680019 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-02-10 09:51:01.680025 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-02-10 09:51:01.680031 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-02-10 09:51:01.680038 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-02-10 09:51:01.680044 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-02-10 09:51:01.680050 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-02-10 09:51:01.680056 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-02-10 09:51:01.680062 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-02-10 09:51:01.680068 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-02-10 09:51:01.680074 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-02-10 09:51:01.680095 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-02-10 09:51:01.680102 | orchestrator | 2025-02-10 09:51:01.680108 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-02-10 09:51:01.680114 | orchestrator | Monday 10 February 2025 09:49:35 +0000 (0:00:09.923) 0:04:30.143 ******* 2025-02-10 09:51:01.680125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:51:01.680132 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:51:01.680145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-10 09:51:01.680156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:51:01.680163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:51:01.680169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.680178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-10 09:51:01.680185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.680191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.680208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.680215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.680221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:51:01.680227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-10 09:51:01.680237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:51:01.680244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-10 09:51:01.680250 | orchestrator | 2025-02-10 09:51:01.680256 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-10 09:51:01.680263 | orchestrator | Monday 10 February 2025 09:49:41 +0000 (0:00:06.481) 0:04:36.624 ******* 2025-02-10 09:51:01.680269 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:51:01.680275 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:51:01.680284 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:51:01.680290 | orchestrator | 2025-02-10 09:51:01.680297 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-02-10 09:51:01.680303 | orchestrator | Monday 10 February 2025 09:49:42 +0000 (0:00:00.328) 0:04:36.953 ******* 2025-02-10 09:51:01.680309 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.680315 | orchestrator | 2025-02-10 09:51:01.680321 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-02-10 09:51:01.680327 | orchestrator | Monday 10 February 2025 09:49:44 +0000 (0:00:02.258) 0:04:39.212 ******* 2025-02-10 09:51:01.680333 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.680339 | orchestrator | 2025-02-10 09:51:01.680345 | orchestrator | TASK [octavia : Creating Octavia 
database user and setting permissions] ******** 2025-02-10 09:51:01.680352 | orchestrator | Monday 10 February 2025 09:49:46 +0000 (0:00:02.158) 0:04:41.370 ******* 2025-02-10 09:51:01.680358 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.680364 | orchestrator | 2025-02-10 09:51:01.680370 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-02-10 09:51:01.680376 | orchestrator | Monday 10 February 2025 09:49:49 +0000 (0:00:02.473) 0:04:43.844 ******* 2025-02-10 09:51:01.680382 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.680388 | orchestrator | 2025-02-10 09:51:01.680397 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-02-10 09:51:01.680403 | orchestrator | Monday 10 February 2025 09:49:51 +0000 (0:00:02.515) 0:04:46.360 ******* 2025-02-10 09:51:01.680410 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.680416 | orchestrator | 2025-02-10 09:51:01.680422 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-02-10 09:51:01.680428 | orchestrator | Monday 10 February 2025 09:50:09 +0000 (0:00:17.758) 0:05:04.118 ******* 2025-02-10 09:51:01.680434 | orchestrator | 2025-02-10 09:51:01.680440 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-02-10 09:51:01.680447 | orchestrator | Monday 10 February 2025 09:50:09 +0000 (0:00:00.112) 0:05:04.231 ******* 2025-02-10 09:51:01.680453 | orchestrator | 2025-02-10 09:51:01.680459 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-02-10 09:51:01.680465 | orchestrator | Monday 10 February 2025 09:50:09 +0000 (0:00:00.076) 0:05:04.308 ******* 2025-02-10 09:51:01.680471 | orchestrator | 2025-02-10 09:51:01.680477 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-02-10 09:51:01.680484 | orchestrator | Monday 10 February 2025 09:50:09 +0000 (0:00:00.057) 0:05:04.365 ******* 2025-02-10 09:51:01.680490 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.680496 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:51:01.680507 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:51:01.680513 | orchestrator | 2025-02-10 09:51:01.680520 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-02-10 09:51:01.680526 | orchestrator | Monday 10 February 2025 09:50:26 +0000 (0:00:17.083) 0:05:21.449 ******* 2025-02-10 09:51:01.680532 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.680538 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:51:01.680544 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:51:01.680550 | orchestrator | 2025-02-10 09:51:01.680556 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-02-10 09:51:01.680563 | orchestrator | Monday 10 February 2025 09:50:33 +0000 (0:00:06.736) 0:05:28.185 ******* 2025-02-10 09:51:01.680569 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.680575 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:51:01.680581 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:51:01.680587 | orchestrator | 2025-02-10 09:51:01.680593 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-02-10 09:51:01.680600 | orchestrator | Monday 10 February 2025 09:50:39 +0000 
(0:00:05.962) 0:05:34.147 ******* 2025-02-10 09:51:01.680606 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.680615 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:51:01.680622 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:51:01.680628 | orchestrator | 2025-02-10 09:51:01.680634 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-02-10 09:51:01.680640 | orchestrator | Monday 10 February 2025 09:50:50 +0000 (0:00:11.372) 0:05:45.520 ******* 2025-02-10 09:51:01.680646 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:51:01.680653 | orchestrator | changed: [testbed-node-2] 2025-02-10 09:51:01.680659 | orchestrator | changed: [testbed-node-1] 2025-02-10 09:51:01.680665 | orchestrator | 2025-02-10 09:51:01.680674 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:51:04.709922 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-02-10 09:51:04.710131 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:51:04.710155 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-10 09:51:04.710168 | orchestrator | 2025-02-10 09:51:04.710182 | orchestrator | 2025-02-10 09:51:04.710195 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:51:04.710210 | orchestrator | Monday 10 February 2025 09:51:01 +0000 (0:00:10.437) 0:05:55.957 ******* 2025-02-10 09:51:04.710223 | orchestrator | =============================================================================== 2025-02-10 09:51:04.710235 | orchestrator | octavia : Create amphora flavor ---------------------------------------- 33.67s 2025-02-10 09:51:04.710248 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 22.64s 2025-02-10 09:51:04.710260 | orchestrator | octavia : Adding octavia related roles --------------------------------- 18.34s 2025-02-10 09:51:04.710273 | orchestrator | octavia : Add rules for security groups -------------------------------- 18.11s 2025-02-10 09:51:04.710285 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 17.76s 2025-02-10 09:51:04.710297 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.08s 2025-02-10 09:51:04.710309 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 11.37s 2025-02-10 09:51:04.710322 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.06s 2025-02-10 09:51:04.710333 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.44s 2025-02-10 09:51:04.710346 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 9.92s 2025-02-10 09:51:04.710382 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.12s 2025-02-10 09:51:04.710396 | orchestrator | octavia : Get security groups for octavia ------------------------------- 8.81s 2025-02-10 09:51:04.710408 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 7.99s 2025-02-10 09:51:04.710421 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 7.74s 2025-02-10 09:51:04.710433 | orchestrator | 
service-ks-register : octavia | Creating endpoints ---------------------- 7.70s 2025-02-10 09:51:04.710446 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.44s 2025-02-10 09:51:04.710459 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.80s 2025-02-10 09:51:04.710472 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.74s 2025-02-10 09:51:04.710484 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.71s 2025-02-10 09:51:04.710497 | orchestrator | octavia : Check octavia containers -------------------------------------- 6.48s 2025-02-10 09:51:04.710528 | orchestrator | 2025-02-10 09:51:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:07.741322 | orchestrator | 2025-02-10 09:51:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:10.780335 | orchestrator | 2025-02-10 09:51:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:13.818901 | orchestrator | 2025-02-10 09:51:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:16.858824 | orchestrator | 2025-02-10 09:51:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:19.897463 | orchestrator | 2025-02-10 09:51:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:22.934707 | orchestrator | 2025-02-10 09:51:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:25.971257 | orchestrator | 2025-02-10 09:51:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:29.008555 | orchestrator | 2025-02-10 09:51:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:32.038384 | orchestrator | 2025-02-10 09:51:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:35.075960 | orchestrator | 2025-02-10 09:51:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:38.116311 | orchestrator | 2025-02-10 09:51:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:41.153870 | orchestrator | 2025-02-10 09:51:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:44.187108 | orchestrator | 2025-02-10 09:51:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:47.235239 | orchestrator | 2025-02-10 09:51:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:50.274363 | orchestrator | 2025-02-10 09:51:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:53.312916 | orchestrator | 2025-02-10 09:51:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:56.348646 | orchestrator | 2025-02-10 09:51:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:51:59.387585 | orchestrator | 2025-02-10 09:51:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-10 09:52:02.425095 | orchestrator | 2025-02-10 09:52:02.707186 | orchestrator | 2025-02-10 09:52:02.711462 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Feb 10 09:52:02 UTC 2025 2025-02-10 09:52:02.714360 | orchestrator | 2025-02-10 09:52:13.637483 | orchestrator | changed 2025-02-10 09:52:13.970506 | 2025-02-10 09:52:13.970653 | TASK [Bootstrap services] 2025-02-10 09:52:14.636740 | orchestrator | 2025-02-10 09:52:14.646558 | orchestrator | # BOOTSTRAP 2025-02-10 09:52:14.646658 | orchestrator | 2025-02-10 
09:52:14.646673 | orchestrator | + set -e 2025-02-10 09:52:14.646711 | orchestrator | + echo 2025-02-10 09:52:14.646724 | orchestrator | + echo '# BOOTSTRAP' 2025-02-10 09:52:14.646737 | orchestrator | + echo 2025-02-10 09:52:14.646754 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-02-10 09:52:14.646783 | orchestrator | + set -e 2025-02-10 09:52:22.167721 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-02-10 09:52:22.167878 | orchestrator | 2025-02-10 09:52:22 | INFO  | Flavor SCS-1V-4 created 2025-02-10 09:52:22.353616 | orchestrator | 2025-02-10 09:52:22 | INFO  | Flavor SCS-2V-8 created 2025-02-10 09:52:22.559361 | orchestrator | 2025-02-10 09:52:22 | INFO  | Flavor SCS-4V-16 created 2025-02-10 09:52:22.708605 | orchestrator | 2025-02-10 09:52:22 | INFO  | Flavor SCS-8V-32 created 2025-02-10 09:52:22.841038 | orchestrator | 2025-02-10 09:52:22 | INFO  | Flavor SCS-1V-2 created 2025-02-10 09:52:22.987926 | orchestrator | 2025-02-10 09:52:22 | INFO  | Flavor SCS-2V-4 created 2025-02-10 09:52:23.109123 | orchestrator | 2025-02-10 09:52:23 | INFO  | Flavor SCS-4V-8 created 2025-02-10 09:52:23.225760 | orchestrator | 2025-02-10 09:52:23 | INFO  | Flavor SCS-8V-16 created 2025-02-10 09:52:23.365144 | orchestrator | 2025-02-10 09:52:23 | INFO  | Flavor SCS-16V-32 created 2025-02-10 09:52:23.510947 | orchestrator | 2025-02-10 09:52:23 | INFO  | Flavor SCS-1V-8 created 2025-02-10 09:52:23.642730 | orchestrator | 2025-02-10 09:52:23 | INFO  | Flavor SCS-2V-16 created 2025-02-10 09:52:23.783569 | orchestrator | 2025-02-10 09:52:23 | INFO  | Flavor SCS-4V-32 created 2025-02-10 09:52:23.936602 | orchestrator | 2025-02-10 09:52:23 | INFO  | Flavor SCS-1L-1 created 2025-02-10 09:52:24.068067 | orchestrator | 2025-02-10 09:52:24 | INFO  | Flavor SCS-2V-4-20s created 2025-02-10 09:52:24.184906 | orchestrator | 2025-02-10 09:52:24 | INFO  | Flavor SCS-4V-16-100s created 2025-02-10 09:52:24.312653 | orchestrator | 2025-02-10 09:52:24 | INFO  | Flavor SCS-1V-4-10 created 2025-02-10 09:52:24.432138 | orchestrator | 2025-02-10 09:52:24 | INFO  | Flavor SCS-2V-8-20 created 2025-02-10 09:52:24.564463 | orchestrator | 2025-02-10 09:52:24 | INFO  | Flavor SCS-4V-16-50 created 2025-02-10 09:52:24.734525 | orchestrator | 2025-02-10 09:52:24 | INFO  | Flavor SCS-8V-32-100 created 2025-02-10 09:52:24.874452 | orchestrator | 2025-02-10 09:52:24 | INFO  | Flavor SCS-1V-2-5 created 2025-02-10 09:52:24.996253 | orchestrator | 2025-02-10 09:52:24 | INFO  | Flavor SCS-2V-4-10 created 2025-02-10 09:52:25.120784 | orchestrator | 2025-02-10 09:52:25 | INFO  | Flavor SCS-4V-8-20 created 2025-02-10 09:52:25.249818 | orchestrator | 2025-02-10 09:52:25 | INFO  | Flavor SCS-8V-16-50 created 2025-02-10 09:52:25.395708 | orchestrator | 2025-02-10 09:52:25 | INFO  | Flavor SCS-16V-32-100 created 2025-02-10 09:52:25.538744 | orchestrator | 2025-02-10 09:52:25 | INFO  | Flavor SCS-1V-8-20 created 2025-02-10 09:52:25.645280 | orchestrator | 2025-02-10 09:52:25 | INFO  | Flavor SCS-2V-16-50 created 2025-02-10 09:52:25.767803 | orchestrator | 2025-02-10 09:52:25 | INFO  | Flavor SCS-4V-32-100 created 2025-02-10 09:52:25.898714 | orchestrator | 2025-02-10 09:52:25 | INFO  | Flavor SCS-1L-1-5 created 2025-02-10 09:52:28.199311 | orchestrator | 2025-02-10 09:52:28 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-02-10 09:52:28.296956 | orchestrator | 2025-02-10 09:52:28 | INFO  | Task 89e31a1d-834f-4cc6-8522-99d8de315c59 (bootstrap-basic) was 
prepared for execution. 2025-02-10 09:52:32.347162 | orchestrator | 2025-02-10 09:52:28 | INFO  | It takes a moment until task 89e31a1d-834f-4cc6-8522-99d8de315c59 (bootstrap-basic) has been started and output is visible here. 2025-02-10 09:52:32.347381 | orchestrator | 2025-02-10 09:52:32.347537 | orchestrator | PLAY [Prepare masquerading on the manager node] ******************************** 2025-02-10 09:52:32.347954 | orchestrator | 2025-02-10 09:52:32.349064 | orchestrator | TASK [Accept FORWARD on the management interface (incoming)] ******************* 2025-02-10 09:52:32.351599 | orchestrator | Monday 10 February 2025 09:52:32 +0000 (0:00:00.190) 0:00:00.190 ******* 2025-02-10 09:52:33.046113 | orchestrator | ok: [testbed-manager] 2025-02-10 09:52:33.046444 | orchestrator | 2025-02-10 09:52:33.047010 | orchestrator | TASK [Accept FORWARD on the management interface (outgoing)] ******************* 2025-02-10 09:52:33.048119 | orchestrator | Monday 10 February 2025 09:52:33 +0000 (0:00:00.699) 0:00:00.890 ******* 2025-02-10 09:52:33.593144 | orchestrator | ok: [testbed-manager] 2025-02-10 09:52:33.594268 | orchestrator | 2025-02-10 09:52:33.599084 | orchestrator | TASK [Masquerade traffic on the management interface] ************************** 2025-02-10 09:52:33.599560 | orchestrator | Monday 10 February 2025 09:52:33 +0000 (0:00:00.547) 0:00:01.438 ******* 2025-02-10 09:52:34.084181 | orchestrator | ok: [testbed-manager] 2025-02-10 09:52:34.085209 | orchestrator | 2025-02-10 09:52:34.086350 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-02-10 09:52:34.087039 | orchestrator | 2025-02-10 09:52:34.091095 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-10 09:52:35.702192 | orchestrator | Monday 10 February 2025 09:52:34 +0000 (0:00:00.492) 0:00:01.930 ******* 2025-02-10 09:52:35.702377 | orchestrator | ok: [localhost] 2025-02-10 09:52:35.702421 | orchestrator | 2025-02-10 09:52:35.702745 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-02-10 09:52:35.703386 | orchestrator | Monday 10 February 2025 09:52:35 +0000 (0:00:01.617) 0:00:03.548 ******* 2025-02-10 09:52:45.644037 | orchestrator | ok: [localhost] 2025-02-10 09:52:45.644368 | orchestrator | 2025-02-10 09:52:45.645162 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-02-10 09:52:52.524231 | orchestrator | Monday 10 February 2025 09:52:45 +0000 (0:00:09.939) 0:00:13.488 ******* 2025-02-10 09:52:52.524389 | orchestrator | changed: [localhost] 2025-02-10 09:52:52.525790 | orchestrator | 2025-02-10 09:52:59.032607 | orchestrator | TASK [Get volume type local] *************************************************** 2025-02-10 09:52:59.032783 | orchestrator | Monday 10 February 2025 09:52:52 +0000 (0:00:06.880) 0:00:20.368 ******* 2025-02-10 09:52:59.032844 | orchestrator | ok: [localhost] 2025-02-10 09:52:59.033047 | orchestrator | 2025-02-10 09:52:59.033440 | orchestrator | TASK [Create volume type local] ************************************************ 2025-02-10 09:52:59.035412 | orchestrator | Monday 10 February 2025 09:52:59 +0000 (0:00:06.508) 0:00:26.876 ******* 2025-02-10 09:53:04.798193 | orchestrator | changed: [localhost] 2025-02-10 09:53:04.798541 | orchestrator | 2025-02-10 09:53:04.798587 | orchestrator | TASK [Create public network] *************************************************** 2025-02-10 
09:53:09.990397 | orchestrator | Monday 10 February 2025 09:53:04 +0000 (0:00:05.765) 0:00:32.641 ******* 2025-02-10 09:53:09.990604 | orchestrator | changed: [localhost] 2025-02-10 09:53:09.990681 | orchestrator | 2025-02-10 09:53:09.990701 | orchestrator | TASK [Set public network to default] ******************************************* 2025-02-10 09:53:09.990721 | orchestrator | Monday 10 February 2025 09:53:09 +0000 (0:00:05.192) 0:00:37.834 ******* 2025-02-10 09:53:15.485464 | orchestrator | changed: [localhost] 2025-02-10 09:53:19.817536 | orchestrator | 2025-02-10 09:53:19.817721 | orchestrator | TASK [Create public subnet] **************************************************** 2025-02-10 09:53:19.817744 | orchestrator | Monday 10 February 2025 09:53:15 +0000 (0:00:05.494) 0:00:43.329 ******* 2025-02-10 09:53:19.817795 | orchestrator | changed: [localhost] 2025-02-10 09:53:19.817870 | orchestrator | 2025-02-10 09:53:19.817892 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-02-10 09:53:19.820310 | orchestrator | Monday 10 February 2025 09:53:19 +0000 (0:00:04.332) 0:00:47.661 ******* 2025-02-10 09:53:23.610147 | orchestrator | changed: [localhost] 2025-02-10 09:53:23.610355 | orchestrator | 2025-02-10 09:53:23.610417 | orchestrator | TASK [Create manager role] ***************************************************** 2025-02-10 09:53:23.610879 | orchestrator | Monday 10 February 2025 09:53:23 +0000 (0:00:03.793) 0:00:51.454 ******* 2025-02-10 09:53:27.061349 | orchestrator | ok: [localhost] 2025-02-10 09:53:27.061478 | orchestrator | 2025-02-10 09:53:27.061504 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:53:27.062133 | orchestrator | 2025-02-10 09:53:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:53:27.062909 | orchestrator | 2025-02-10 09:53:27 | INFO  | Please wait and do not abort execution. 
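[editor's note] The bootstrap steps above (the SCS flavor creation from 300-openstack.sh and the bootstrap-basic play) correspond to plain openstack CLI calls. A minimal sketch, assuming the SCS-2V-8 name encodes 2 vCPUs / 8 GiB RAM, that the public network is an external provider network, and using a placeholder 192.0.2.0/24 range; neither the flavor details, the LUKS encryption parameters, nor the real CIDR are shown in this log:

    # Flavor named per the SCS scheme; vCPU/RAM inferred from the name, root disk size is an assumption.
    openstack flavor create --public --vcpus 2 --ram 8192 --disk 0 SCS-2V-8

    # Volume types created by the bootstrap-basic play (LUKS encryption settings not visible in the log).
    openstack volume type create LUKS
    openstack volume type create local

    # Public provider network marked as default, with a subnet and a default IPv4 subnet pool.
    openstack network create --external public
    openstack network set --default public
    openstack subnet create --network public --subnet-range 192.0.2.0/24 public-subnet
    openstack subnet pool create --default --pool-prefix 192.0.2.0/24 default-ipv4
    openstack role create manager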
2025-02-10 09:53:27.062973 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:53:27.063679 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 09:53:27.064310 | orchestrator | 2025-02-10 09:53:27.065649 | orchestrator | 2025-02-10 09:53:27.065795 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:53:27.066700 | orchestrator | Monday 10 February 2025 09:53:27 +0000 (0:00:03.449) 0:00:54.904 ******* 2025-02-10 09:53:27.067176 | orchestrator | =============================================================================== 2025-02-10 09:53:27.067480 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.94s 2025-02-10 09:53:27.067727 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.88s 2025-02-10 09:53:27.068101 | orchestrator | Get volume type local --------------------------------------------------- 6.51s 2025-02-10 09:53:27.068248 | orchestrator | Create volume type local ------------------------------------------------ 5.77s 2025-02-10 09:53:27.068738 | orchestrator | Set public network to default ------------------------------------------- 5.49s 2025-02-10 09:53:27.070490 | orchestrator | Create public network --------------------------------------------------- 5.19s 2025-02-10 09:53:27.070581 | orchestrator | Create public subnet ---------------------------------------------------- 4.33s 2025-02-10 09:53:27.073566 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.79s 2025-02-10 09:53:27.073832 | orchestrator | Create manager role ----------------------------------------------------- 3.45s 2025-02-10 09:53:27.073867 | orchestrator | Gathering Facts --------------------------------------------------------- 1.62s 2025-02-10 09:53:27.074148 | orchestrator | Accept FORWARD on the management interface (incoming) ------------------- 0.70s 2025-02-10 09:53:27.074296 | orchestrator | Accept FORWARD on the management interface (outgoing) ------------------- 0.55s 2025-02-10 09:53:27.074595 | orchestrator | Masquerade traffic on the management interface -------------------------- 0.49s 2025-02-10 09:53:32.858758 | orchestrator | 2025-02-10 09:53:32 | INFO  | Processing image 'Cirros 0.6.2' 2025-02-10 09:53:33.079829 | orchestrator | 2025-02-10 09:53:33 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-02-10 09:53:34.797443 | orchestrator | 2025-02-10 09:53:33 | INFO  | Importing image Cirros 0.6.2 2025-02-10 09:53:34.797606 | orchestrator | 2025-02-10 09:53:33 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-02-10 09:53:34.797650 | orchestrator | 2025-02-10 09:53:34 | INFO  | Waiting for image to leave queued state... 2025-02-10 09:53:36.847591 | orchestrator | 2025-02-10 09:53:36 | INFO  | Waiting for import to complete... 
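[editor's note] The image import running here, and the property/tag/visibility settings logged just below, can be reproduced manually with the openstack CLI. A minimal sketch, assuming the Cirros file is downloaded locally first instead of being imported server-side from the URL:

    curl -LO https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img

    # Upload the image, then apply the same metadata the image manager sets in the log.
    openstack image create --disk-format qcow2 --container-format bare \
      --file cirros-0.6.2-x86_64-disk.img "Cirros 0.6.2"
    openstack image set --public --tag os:cirros \
      --property architecture=x86_64 --property hw_disk_bus=scsi \
      --property hw_rng_model=virtio --property os_distro=cirros "Cirros 0.6.2"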
2025-02-10 09:53:47.188693 | orchestrator | 2025-02-10 09:53:47 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-02-10 09:53:47.375379 | orchestrator | 2025-02-10 09:53:47 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-02-10 09:53:47.624834 | orchestrator | 2025-02-10 09:53:47 | INFO  | Setting internal_version = 0.6.2 2025-02-10 09:53:47.625038 | orchestrator | 2025-02-10 09:53:47 | INFO  | Setting image_original_user = cirros 2025-02-10 09:53:47.625062 | orchestrator | 2025-02-10 09:53:47 | INFO  | Adding tag os:cirros 2025-02-10 09:53:47.625098 | orchestrator | 2025-02-10 09:53:47 | INFO  | Setting property architecture: x86_64 2025-02-10 09:53:47.919738 | orchestrator | 2025-02-10 09:53:47 | INFO  | Setting property hw_disk_bus: scsi 2025-02-10 09:53:48.206899 | orchestrator | 2025-02-10 09:53:48 | INFO  | Setting property hw_rng_model: virtio 2025-02-10 09:53:48.411135 | orchestrator | 2025-02-10 09:53:48 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-02-10 09:53:48.662326 | orchestrator | 2025-02-10 09:53:48 | INFO  | Setting property hw_watchdog_action: reset 2025-02-10 09:53:48.878203 | orchestrator | 2025-02-10 09:53:48 | INFO  | Setting property hypervisor_type: qemu 2025-02-10 09:53:49.085506 | orchestrator | 2025-02-10 09:53:49 | INFO  | Setting property os_distro: cirros 2025-02-10 09:53:49.274786 | orchestrator | 2025-02-10 09:53:49 | INFO  | Setting property replace_frequency: never 2025-02-10 09:53:49.545313 | orchestrator | 2025-02-10 09:53:49 | INFO  | Setting property uuid_validity: none 2025-02-10 09:53:49.746392 | orchestrator | 2025-02-10 09:53:49 | INFO  | Setting property provided_until: none 2025-02-10 09:53:49.977464 | orchestrator | 2025-02-10 09:53:49 | INFO  | Setting property image_description: Cirros 2025-02-10 09:53:50.182061 | orchestrator | 2025-02-10 09:53:50 | INFO  | Setting property image_name: Cirros 2025-02-10 09:53:50.381689 | orchestrator | 2025-02-10 09:53:50 | INFO  | Setting property internal_version: 0.6.2 2025-02-10 09:53:50.569662 | orchestrator | 2025-02-10 09:53:50 | INFO  | Setting property image_original_user: cirros 2025-02-10 09:53:50.777605 | orchestrator | 2025-02-10 09:53:50 | INFO  | Setting property os_version: 0.6.2 2025-02-10 09:53:51.021808 | orchestrator | 2025-02-10 09:53:51 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-02-10 09:53:51.310582 | orchestrator | 2025-02-10 09:53:51 | INFO  | Setting property image_build_date: 2023-05-30 2025-02-10 09:53:51.552977 | orchestrator | 2025-02-10 09:53:51 | INFO  | Checking status of 'Cirros 0.6.2' 2025-02-10 09:53:51.960773 | orchestrator | 2025-02-10 09:53:51 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-02-10 09:53:51.960888 | orchestrator | 2025-02-10 09:53:51 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-02-10 09:53:51.960996 | orchestrator | 2025-02-10 09:53:51 | INFO  | Processing image 'Cirros 0.6.3' 2025-02-10 09:53:52.160030 | orchestrator | 2025-02-10 09:53:52 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-02-10 09:53:53.636878 | orchestrator | 2025-02-10 09:53:52 | INFO  | Importing image Cirros 0.6.3 2025-02-10 09:53:53.637043 | orchestrator | 2025-02-10 09:53:52 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-02-10 09:53:53.637082 | orchestrator | 2025-02-10 
09:53:53 | INFO  | Waiting for image to leave queued state... 2025-02-10 09:53:55.673536 | orchestrator | 2025-02-10 09:53:55 | INFO  | Waiting for import to complete... 2025-02-10 09:54:05.845365 | orchestrator | 2025-02-10 09:54:05 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-02-10 09:54:06.102113 | orchestrator | 2025-02-10 09:54:06 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-02-10 09:54:06.374407 | orchestrator | 2025-02-10 09:54:06 | INFO  | Setting internal_version = 0.6.3 2025-02-10 09:54:06.374699 | orchestrator | 2025-02-10 09:54:06 | INFO  | Setting image_original_user = cirros 2025-02-10 09:54:06.374737 | orchestrator | 2025-02-10 09:54:06 | INFO  | Adding tag os:cirros 2025-02-10 09:54:06.374775 | orchestrator | 2025-02-10 09:54:06 | INFO  | Setting property architecture: x86_64 2025-02-10 09:54:06.712102 | orchestrator | 2025-02-10 09:54:06 | INFO  | Setting property hw_disk_bus: scsi 2025-02-10 09:54:06.947517 | orchestrator | 2025-02-10 09:54:06 | INFO  | Setting property hw_rng_model: virtio 2025-02-10 09:54:07.206655 | orchestrator | 2025-02-10 09:54:07 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-02-10 09:54:07.421121 | orchestrator | 2025-02-10 09:54:07 | INFO  | Setting property hw_watchdog_action: reset 2025-02-10 09:54:07.681632 | orchestrator | 2025-02-10 09:54:07 | INFO  | Setting property hypervisor_type: qemu 2025-02-10 09:54:07.895443 | orchestrator | 2025-02-10 09:54:07 | INFO  | Setting property os_distro: cirros 2025-02-10 09:54:08.158561 | orchestrator | 2025-02-10 09:54:08 | INFO  | Setting property replace_frequency: never 2025-02-10 09:54:08.381574 | orchestrator | 2025-02-10 09:54:08 | INFO  | Setting property uuid_validity: none 2025-02-10 09:54:08.630054 | orchestrator | 2025-02-10 09:54:08 | INFO  | Setting property provided_until: none 2025-02-10 09:54:08.871139 | orchestrator | 2025-02-10 09:54:08 | INFO  | Setting property image_description: Cirros 2025-02-10 09:54:09.097613 | orchestrator | 2025-02-10 09:54:09 | INFO  | Setting property image_name: Cirros 2025-02-10 09:54:09.339772 | orchestrator | 2025-02-10 09:54:09 | INFO  | Setting property internal_version: 0.6.3 2025-02-10 09:54:09.611710 | orchestrator | 2025-02-10 09:54:09 | INFO  | Setting property image_original_user: cirros 2025-02-10 09:54:09.856039 | orchestrator | 2025-02-10 09:54:09 | INFO  | Setting property os_version: 0.6.3 2025-02-10 09:54:10.092515 | orchestrator | 2025-02-10 09:54:10 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-02-10 09:54:10.323491 | orchestrator | 2025-02-10 09:54:10 | INFO  | Setting property image_build_date: 2024-09-26 2025-02-10 09:54:10.559190 | orchestrator | 2025-02-10 09:54:10 | INFO  | Checking status of 'Cirros 0.6.3' 2025-02-10 09:54:11.639829 | orchestrator | 2025-02-10 09:54:10 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-02-10 09:54:11.639956 | orchestrator | 2025-02-10 09:54:10 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-02-10 09:54:11.639980 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-02-10 09:54:13.444407 | orchestrator | 2025-02-10 09:54:13 | INFO  | date: 2025-02-10 2025-02-10 09:54:13.491289 | orchestrator | 2025-02-10 09:54:13 | INFO  | image: octavia-amphora-haproxy-2024.1.20250210.qcow2 2025-02-10 09:54:13.491464 | orchestrator | 2025-02-10 09:54:13 | INFO  | url: 
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250210.qcow2 2025-02-10 09:54:13.491527 | orchestrator | 2025-02-10 09:54:13 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250210.qcow2.CHECKSUM 2025-02-10 09:54:13.491576 | orchestrator | 2025-02-10 09:54:13 | INFO  | checksum: 818d90cbc1a4e91780f1e125e5e94e12877510b54e6f64cd7dcd858ef37722f9 2025-02-10 09:54:15.940940 | orchestrator | 2025-02-10 09:54:15 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-02-10' 2025-02-10 09:54:15.957655 | orchestrator | 2025-02-10 09:54:15 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250210.qcow2: 200 2025-02-10 09:54:16.388822 | orchestrator | 2025-02-10 09:54:15 | INFO  | Importing image OpenStack Octavia Amphora 2025-02-10 2025-02-10 09:54:16.389021 | orchestrator | 2025-02-10 09:54:15 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250210.qcow2 2025-02-10 09:54:16.389065 | orchestrator | 2025-02-10 09:54:16 | INFO  | Waiting for image to leave queued state... 2025-02-10 09:54:18.437494 | orchestrator | 2025-02-10 09:54:18 | INFO  | Waiting for import to complete... 2025-02-10 09:54:28.539704 | orchestrator | 2025-02-10 09:54:28 | INFO  | Waiting for import to complete... 2025-02-10 09:54:38.638959 | orchestrator | 2025-02-10 09:54:38 | INFO  | Waiting for import to complete... 2025-02-10 09:54:48.777923 | orchestrator | 2025-02-10 09:54:48 | INFO  | Waiting for import to complete... 2025-02-10 09:54:58.916070 | orchestrator | 2025-02-10 09:54:58 | INFO  | Waiting for import to complete... 
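[editor's note] Because the amphora image is fetched from an external Swift container, the SHA256 checksum printed above can be used to verify a local copy of the file independently of the import. A minimal sketch using the URL and checksum from the log:

    curl -LO https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250210.qcow2

    # Compare against the checksum logged by 301-openstack-octavia-amhpora-image.sh.
    echo "818d90cbc1a4e91780f1e125e5e94e12877510b54e6f64cd7dcd858ef37722f9  octavia-amphora-haproxy-2024.1.20250210.qcow2" \
      | sha256sum -c -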
2025-02-10 09:55:09.052585 | orchestrator | 2025-02-10 09:55:09 | INFO  | Import of 'OpenStack Octavia Amphora 2025-02-10' successfully completed, reloading images 2025-02-10 09:55:09.398685 | orchestrator | 2025-02-10 09:55:09 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-02-10' 2025-02-10 09:55:09.576705 | orchestrator | 2025-02-10 09:55:09 | INFO  | Setting internal_version = 2025-02-10 2025-02-10 09:55:09.576828 | orchestrator | 2025-02-10 09:55:09 | INFO  | Setting image_original_user = ubuntu 2025-02-10 09:55:09.576912 | orchestrator | 2025-02-10 09:55:09 | INFO  | Adding tag amphora 2025-02-10 09:55:09.576948 | orchestrator | 2025-02-10 09:55:09 | INFO  | Adding tag os:ubuntu 2025-02-10 09:55:09.809656 | orchestrator | 2025-02-10 09:55:09 | INFO  | Setting property architecture: x86_64 2025-02-10 09:55:10.036775 | orchestrator | 2025-02-10 09:55:10 | INFO  | Setting property hw_disk_bus: scsi 2025-02-10 09:55:10.235885 | orchestrator | 2025-02-10 09:55:10 | INFO  | Setting property hw_rng_model: virtio 2025-02-10 09:55:10.479600 | orchestrator | 2025-02-10 09:55:10 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-02-10 09:55:10.683460 | orchestrator | 2025-02-10 09:55:10 | INFO  | Setting property hw_watchdog_action: reset 2025-02-10 09:55:10.893714 | orchestrator | 2025-02-10 09:55:10 | INFO  | Setting property hypervisor_type: qemu 2025-02-10 09:55:11.129062 | orchestrator | 2025-02-10 09:55:11 | INFO  | Setting property os_distro: ubuntu 2025-02-10 09:55:11.319051 | orchestrator | 2025-02-10 09:55:11 | INFO  | Setting property replace_frequency: quarterly 2025-02-10 09:55:11.505832 | orchestrator | 2025-02-10 09:55:11 | INFO  | Setting property uuid_validity: last-1 2025-02-10 09:55:11.759746 | orchestrator | 2025-02-10 09:55:11 | INFO  | Setting property provided_until: none 2025-02-10 09:55:11.972972 | orchestrator | 2025-02-10 09:55:11 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-02-10 09:55:12.180306 | orchestrator | 2025-02-10 09:55:12 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-02-10 09:55:12.376658 | orchestrator | 2025-02-10 09:55:12 | INFO  | Setting property internal_version: 2025-02-10 2025-02-10 09:55:12.599949 | orchestrator | 2025-02-10 09:55:12 | INFO  | Setting property image_original_user: ubuntu 2025-02-10 09:55:12.817262 | orchestrator | 2025-02-10 09:55:12 | INFO  | Setting property os_version: 2025-02-10 2025-02-10 09:55:13.057335 | orchestrator | 2025-02-10 09:55:13 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250210.qcow2 2025-02-10 09:55:13.263402 | orchestrator | 2025-02-10 09:55:13 | INFO  | Setting property image_build_date: 2025-02-10 2025-02-10 09:55:13.469746 | orchestrator | 2025-02-10 09:55:13 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-02-10' 2025-02-10 09:55:13.622957 | orchestrator | 2025-02-10 09:55:13 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-02-10' 2025-02-10 09:55:13.623138 | orchestrator | 2025-02-10 09:55:13 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-02-10 09:55:14.151948 | orchestrator | 2025-02-10 09:55:13 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-02-10 09:55:14.152069 | orchestrator | 2025-02-10 09:55:13 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-02-10 09:55:14.152083 | 
orchestrator | 2025-02-10 09:55:13 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-02-10 09:55:14.252004 | orchestrator | changed 2025-02-10 09:55:14.271372 | 2025-02-10 09:55:14.271496 | TASK [Run checks] 2025-02-10 09:55:14.982712 | orchestrator | + set -e 2025-02-10 09:55:14.983631 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-10 09:55:14.983706 | orchestrator | ++ export INTERACTIVE=false 2025-02-10 09:55:14.983737 | orchestrator | ++ INTERACTIVE=false 2025-02-10 09:55:14.983804 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-10 09:55:14.983858 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-10 09:55:14.983876 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-02-10 09:55:14.983919 | orchestrator | +++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2025-02-10 09:55:15.026437 | orchestrator | 2025-02-10 09:55:15.027494 | orchestrator | # CHECK 2025-02-10 09:55:15.027532 | orchestrator | 2025-02-10 09:55:15.027549 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-10 09:55:15.027566 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-10 09:55:15.027582 | orchestrator | + echo 2025-02-10 09:55:15.027596 | orchestrator | + echo '# CHECK' 2025-02-10 09:55:15.027611 | orchestrator | + echo 2025-02-10 09:55:15.027626 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-02-10 09:55:15.027649 | orchestrator | ++ semver latest 5.0.0 2025-02-10 09:55:15.084351 | orchestrator | 2025-02-10 09:55:17.076644 | orchestrator | ## Containers @ testbed-manager 2025-02-10 09:55:17.076800 | orchestrator | 2025-02-10 09:55:17.076811 | orchestrator | + [[ -1 -eq -1 ]] 2025-02-10 09:55:17.076817 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-02-10 09:55:17.076823 | orchestrator | + echo 2025-02-10 09:55:17.076875 | orchestrator | + echo '## Containers @ testbed-manager' 2025-02-10 09:55:17.076888 | orchestrator | + echo 2025-02-10 09:55:17.076897 | orchestrator | + osism container testbed-manager ps 2025-02-10 09:55:17.076961 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-02-10 09:55:17.076975 | orchestrator | 91ba04090d94 nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_blackbox_exporter 2025-02-10 09:55:17.076985 | orchestrator | c01061be302e nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_alertmanager 2025-02-10 09:55:17.076998 | orchestrator | a517ebec7714 nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2025-02-10 09:55:17.077003 | orchestrator | a48563fdeb5d nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-02-10 09:55:17.077012 | orchestrator | 9abf19fd3923 nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_server 2025-02-10 09:55:17.077019 | orchestrator | 154f5ce48970 quay.io/osism/cephclient:quincy "/usr/bin/dumb-init …" 20 minutes ago Up 19 minutes cephclient 2025-02-10 09:55:17.077026 | orchestrator | edebc9fe8b4b nexus.testbed.osism.xyz:8193/kolla/cron:2024.1 "dumb-init --single-…" 36 minutes ago Up 36 minutes cron 2025-02-10 09:55:17.077031 | 
orchestrator | aa1bf3985757 nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1 "dumb-init --single-…" 36 minutes ago Up 36 minutes kolla_toolbox 2025-02-10 09:55:17.077067 | orchestrator | be0dccccfd58 nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1 "dumb-init --single-…" 36 minutes ago Up 36 minutes fluentd 2025-02-10 09:55:17.077073 | orchestrator | cf3621f5ebfc phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 37 minutes ago Up 36 minutes (healthy) 80/tcp phpmyadmin 2025-02-10 09:55:17.077079 | orchestrator | b6c26a9f507a quay.io/osism/openstackclient:2024.1 "/usr/bin/dumb-init …" 37 minutes ago Up 37 minutes openstackclient 2025-02-10 09:55:17.077084 | orchestrator | 744d8bd35965 quay.io/osism/homer:v25.02.1 "/bin/sh /entrypoint…" 37 minutes ago Up 37 minutes (healthy) 8080/tcp homer 2025-02-10 09:55:17.077093 | orchestrator | 74288368459d ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 57 minutes ago Up 56 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-02-10 09:55:17.077099 | orchestrator | 239997934a13 quay.io/osism/nexus:3.76.1 "/opt/sonatype/nexus…" 59 minutes ago Up 58 minutes (healthy) 8081/tcp, 192.168.16.5:8191-8199->8191-8199/tcp nexus 2025-02-10 09:55:17.077112 | orchestrator | 368aaff811bb quay.io/osism/kolla-ansible:2024.1 "/entrypoint.sh osis…" About an hour ago Up About an hour (healthy) kolla-ansible 2025-02-10 09:55:17.077118 | orchestrator | e4457be078c5 quay.io/osism/osism-kubernetes:latest "/entrypoint.sh osis…" About an hour ago Up About an hour (healthy) osism-kubernetes 2025-02-10 09:55:17.077124 | orchestrator | a12e6565519d quay.io/osism/ceph-ansible:quincy "/entrypoint.sh osis…" About an hour ago Up About an hour (healthy) ceph-ansible 2025-02-10 09:55:17.077132 | orchestrator | e716b3db8263 quay.io/osism/osism-ansible:latest "/entrypoint.sh osis…" About an hour ago Up About an hour (healthy) osism-ansible 2025-02-10 09:55:17.077138 | orchestrator | db1222aa8a18 quay.io/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" About an hour ago Up About an hour (healthy) 8000/tcp manager-ara-server-1 2025-02-10 09:55:17.077143 | orchestrator | 24fdf51b5cfa quay.io/osism/osism:latest "/usr/bin/tini -- sl…" About an hour ago Up About an hour (healthy) osismclient 2025-02-10 09:55:17.077149 | orchestrator | 881513622b96 quay.io/osism/osism:latest "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-openstack-1 2025-02-10 09:55:17.077154 | orchestrator | 1408e005771a quay.io/osism/osism:latest "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-flower-1 2025-02-10 09:55:17.077159 | orchestrator | ef85459a9b1a quay.io/osism/osism:latest "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-conductor-1 2025-02-10 09:55:17.077169 | orchestrator | 88ad4ef2d1c4 quay.io/osism/osism-netbox:latest "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-netbox-1 2025-02-10 09:55:17.077178 | orchestrator | d8f05e6edcdc mariadb:11.6.2 "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 3306/tcp manager-mariadb-1 2025-02-10 09:55:17.077183 | orchestrator | f0f589c191ac quay.io/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" About an hour ago Up About an hour (healthy) manager-inventory_reconciler-1 2025-02-10 09:55:17.077188 | orchestrator | 88fe37b79da9 quay.io/osism/osism:latest "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-02-10 09:55:17.077194 | orchestrator | 097570d6795f 
redis:7.4.2-alpine "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 6379/tcp manager-redis-1 2025-02-10 09:55:17.077199 | orchestrator | 24e28d72e1ed quay.io/osism/osism:latest "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-beat-1 2025-02-10 09:55:17.077204 | orchestrator | d1cfe86c2af8 quay.io/osism/osism:latest "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-listener-1 2025-02-10 09:55:17.077218 | orchestrator | 55da172cf825 quay.io/osism/osism:latest "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-watchdog-1 2025-02-10 09:55:17.320533 | orchestrator | ed89f0a9529e quay.io/osism/netbox:v4.1.10 "/opt/netbox/venv/bi…" About an hour ago Up About an hour (healthy) netbox-netbox-worker-1 2025-02-10 09:55:17.320684 | orchestrator | 6d87728d131b quay.io/osism/netbox:v4.1.10 "/usr/bin/tini -- /o…" About an hour ago Up About an hour (healthy) netbox-netbox-1 2025-02-10 09:55:17.320711 | orchestrator | cb385abcb97c postgres:16.6-alpine "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 5432/tcp netbox-postgres-1 2025-02-10 09:55:17.320727 | orchestrator | 877a93368e27 redis:7.4.2-alpine "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 6379/tcp netbox-redis-1 2025-02-10 09:55:17.320742 | orchestrator | d1e6a627c4b0 traefik:v3.3.3 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-02-10 09:55:17.320787 | orchestrator | 2025-02-10 09:55:19.285406 | orchestrator | ## Images @ testbed-manager 2025-02-10 09:55:19.285629 | orchestrator | 2025-02-10 09:55:19.285645 | orchestrator | + echo 2025-02-10 09:55:19.285657 | orchestrator | + echo '## Images @ testbed-manager' 2025-02-10 09:55:19.285668 | orchestrator | + echo 2025-02-10 09:55:19.285678 | orchestrator | + osism container testbed-manager images 2025-02-10 09:55:19.285730 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-02-10 09:55:19.545995 | orchestrator | quay.io/osism/osism-ansible latest 35015ed8f3c5 2 hours ago 928MB 2025-02-10 09:55:19.546211 | orchestrator | quay.io/osism/kolla-ansible 2024.1 2ae4845c2aaf 2 hours ago 574MB 2025-02-10 09:55:19.546233 | orchestrator | quay.io/osism/ceph-ansible quincy 2b4db49b3e5e 3 hours ago 495MB 2025-02-10 09:55:19.546248 | orchestrator | quay.io/osism/osism-netbox latest 6da91141cac5 3 hours ago 562MB 2025-02-10 09:55:19.546277 | orchestrator | quay.io/osism/osism latest 051b620fe7ce 3 hours ago 536MB 2025-02-10 09:55:19.546292 | orchestrator | quay.io/osism/homer v25.02.1 3429ddec2f50 7 hours ago 11MB 2025-02-10 09:55:19.546305 | orchestrator | quay.io/osism/cephclient quincy 56403e2d2a5e 7 hours ago 446MB 2025-02-10 09:55:19.546319 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/fluentd 2024.1 3160b419dff1 9 hours ago 537MB 2025-02-10 09:55:19.546333 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox 2024.1 aa58b6f7f75b 9 hours ago 642MB 2025-02-10 09:55:19.546361 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cron 2024.1 81ea34048fa3 9 hours ago 266MB 2025-02-10 09:55:19.546376 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager 2024.1 2e55b19dd827 9 hours ago 400MB 2025-02-10 09:55:19.546390 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter 2024.1 88f9b04fff3f 9 hours ago 308MB 2025-02-10 09:55:19.546404 | orchestrator | 
nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server 2024.1 0b6735bc4a9f 9 hours ago 767MB 2025-02-10 09:55:19.546417 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor 2024.1 f0da5697abdd 9 hours ago 360MB 2025-02-10 09:55:19.546431 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter 2024.1 fb46e5a8211c 9 hours ago 305MB 2025-02-10 09:55:19.546444 | orchestrator | quay.io/osism/osism-kubernetes latest ca6bdc06700f 10 hours ago 967MB 2025-02-10 09:55:19.546458 | orchestrator | quay.io/osism/inventory-reconciler latest f59ec3d136e1 10 hours ago 269MB 2025-02-10 09:55:19.546472 | orchestrator | quay.io/osism/openstackclient 2024.1 2997541d3529 4 days ago 248MB 2025-02-10 09:55:19.546485 | orchestrator | postgres 16.6-alpine 5c773214aed7 6 days ago 275MB 2025-02-10 09:55:19.546500 | orchestrator | traefik v3.3.3 1c768f87626a 9 days ago 190MB 2025-02-10 09:55:19.546514 | orchestrator | hashicorp/vault 1.18.4 5a833a065801 11 days ago 485MB 2025-02-10 09:55:19.546528 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 2 weeks ago 571MB 2025-02-10 09:55:19.546542 | orchestrator | quay.io/osism/nexus 3.76.1 484c8f43d4b5 2 weeks ago 640MB 2025-02-10 09:55:19.546555 | orchestrator | redis 7.4.2-alpine ee33180a8437 4 weeks ago 41.4MB 2025-02-10 09:55:19.546569 | orchestrator | quay.io/osism/netbox v4.1.10 3d731b2d642c 6 weeks ago 761MB 2025-02-10 09:55:19.546612 | orchestrator | mariadb 11.6.2 027c25922bcd 2 months ago 415MB 2025-02-10 09:55:19.546626 | orchestrator | quay.io/osism/ara-server 1.7.2 bb44122eb176 5 months ago 300MB 2025-02-10 09:55:19.546640 | orchestrator | ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 8 months ago 146MB 2025-02-10 09:55:19.546676 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-02-10 09:55:19.606464 | orchestrator | ++ semver latest 5.0.0 2025-02-10 09:55:19.606619 | orchestrator | 2025-02-10 09:55:21.656320 | orchestrator | ## Containers @ testbed-node-0 2025-02-10 09:55:21.656524 | orchestrator | 2025-02-10 09:55:21.656558 | orchestrator | + [[ -1 -eq -1 ]] 2025-02-10 09:55:21.656585 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-02-10 09:55:21.656609 | orchestrator | + echo 2025-02-10 09:55:21.656636 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-02-10 09:55:21.656685 | orchestrator | + echo 2025-02-10 09:55:21.656710 | orchestrator | + osism container testbed-node-0 ps 2025-02-10 09:55:21.656796 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-02-10 09:55:21.656904 | orchestrator | 5a278ba584f0 nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-02-10 09:55:21.656925 | orchestrator | eae25f4c5b56 nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-02-10 09:55:21.656940 | orchestrator | 434e2fcef646 nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-02-10 09:55:21.656955 | orchestrator | b631d3aae7c1 nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-02-10 09:55:21.656970 | orchestrator | 4bee60b69722 nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-02-10 09:55:21.656985 | 
orchestrator | 6de3447f169d nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) nova_compute_ironic 2025-02-10 09:55:21.656999 | orchestrator | c2146226305b nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-02-10 09:55:21.657013 | orchestrator | 1b1eaa455f91 nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-02-10 09:55:21.657027 | orchestrator | 5b12f03a4d56 nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-02-10 09:55:21.657041 | orchestrator | 6aadfea4e0ec nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2025-02-10 09:55:21.657055 | orchestrator | 4cbfe43f6907 nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) glance_api 2025-02-10 09:55:21.657069 | orchestrator | fda115560ff0 nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-02-10 09:55:21.657101 | orchestrator | 31471d4367b6 nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_elasticsearch_exporter 2025-02-10 09:55:21.657148 | orchestrator | a32cc96ae775 nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler 2025-02-10 09:55:21.657178 | orchestrator | 3525e6f31b2d nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-02-10 09:55:21.657195 | orchestrator | 117ed62b218e nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2025-02-10 09:55:21.657230 | orchestrator | 3bbc9e5fe984 nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_memcached_exporter 2025-02-10 09:55:21.657258 | orchestrator | 1327769545c3 nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_mysqld_exporter 2025-02-10 09:55:21.657282 | orchestrator | e323f42e8304 nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-02-10 09:55:21.657308 | orchestrator | f4f46d478ce9 nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) ironic_neutron_agent 2025-02-10 09:55:21.657351 | orchestrator | c5d4cd7bde77 nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-02-10 09:55:21.657377 | orchestrator | ddfe79bfa002 nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2025-02-10 09:55:21.657405 | orchestrator | 0a77ed2f9d95 nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) ironic_http 2025-02-10 09:55:21.657428 | orchestrator | 8fe24a427fca nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1 
"dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2025-02-10 09:55:21.657448 | orchestrator | fb594a760d0a nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes ironic_tftp 2025-02-10 09:55:21.657467 | orchestrator | 54ba256edd24 nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) ironic_inspector 2025-02-10 09:55:21.657482 | orchestrator | a86d2e2ca1a0 nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) ironic_api 2025-02-10 09:55:21.657495 | orchestrator | 305722ab55db nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) ironic_conductor 2025-02-10 09:55:21.657509 | orchestrator | e515fb629b9c nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2025-02-10 09:55:21.657523 | orchestrator | 17487b83a456 nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_worker 2025-02-10 09:55:21.657537 | orchestrator | f5182ee9e2ab nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_mdns 2025-02-10 09:55:21.657555 | orchestrator | d497771e31ba nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_producer 2025-02-10 09:55:21.657594 | orchestrator | 3ed962b054d6 nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_central 2025-02-10 09:55:21.657619 | orchestrator | 4d253828e012 nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) designate_api 2025-02-10 09:55:21.657643 | orchestrator | 0b28c7bc434e nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) designate_backend_bind9 2025-02-10 09:55:21.657665 | orchestrator | ba19ddb09d28 nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_worker 2025-02-10 09:55:21.657681 | orchestrator | 04dadc05a199 nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_keystone_listener 2025-02-10 09:55:21.657695 | orchestrator | e8e00e351412 nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_api 2025-02-10 09:55:21.657718 | orchestrator | efe0642c923a nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy "/opt/ceph-container…" 19 minutes ago Up 19 minutes ceph-mgr-testbed-node-0 2025-02-10 09:55:21.657741 | orchestrator | 37e2b81ec759 nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) keystone 2025-02-10 09:55:21.657764 | orchestrator | 895713ce6722 nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) keystone_fernet 2025-02-10 09:55:21.657801 | orchestrator | dcd1a8c627a2 nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) keystone_ssh 2025-02-10 
09:55:21.657849 | orchestrator | 3156d3772e29 nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (unhealthy) horizon 2025-02-10 09:55:21.657874 | orchestrator | 84efbabe9495 nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1 "dumb-init -- kolla_…" 24 minutes ago Up 24 minutes (healthy) mariadb 2025-02-10 09:55:21.657895 | orchestrator | 749b46b5b658 nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes mariadb_clustercheck 2025-02-10 09:55:21.657919 | orchestrator | a19379b93da8 nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy "/usr/bin/ceph-crash" 26 minutes ago Up 26 minutes ceph-crash-testbed-node-0 2025-02-10 09:55:21.657942 | orchestrator | d21ae5af99b0 nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) opensearch_dashboards 2025-02-10 09:55:21.657965 | orchestrator | d69161d01a3f nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) opensearch 2025-02-10 09:55:21.657988 | orchestrator | 641166fa1390 nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes keepalived 2025-02-10 09:55:21.658011 | orchestrator | 07b1c9c11271 nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) haproxy 2025-02-10 09:55:21.658109 | orchestrator | 191b8e1c8435 nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes ovn_northd 2025-02-10 09:55:21.658125 | orchestrator | ccb928df79ef nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes ovn_sb_db 2025-02-10 09:55:21.658138 | orchestrator | 4484a4da621b nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy "/opt/ceph-container…" 32 minutes ago Up 32 minutes ceph-mon-testbed-node-0 2025-02-10 09:55:21.658153 | orchestrator | 8befd528832e nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes ovn_nb_db 2025-02-10 09:55:21.658171 | orchestrator | 9a9e9696a5c6 nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1 "dumb-init --single-…" 33 minutes ago Up 33 minutes ovn_controller 2025-02-10 09:55:21.658185 | orchestrator | d9185a9911fe nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) rabbitmq 2025-02-10 09:55:21.658199 | orchestrator | 86f875f782b2 nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1 "dumb-init --single-…" 35 minutes ago Up 34 minutes (healthy) openvswitch_vswitchd 2025-02-10 09:55:21.658213 | orchestrator | f1cc95d81d88 nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) openvswitch_db 2025-02-10 09:55:21.658227 | orchestrator | 765106c791e2 nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) redis_sentinel 2025-02-10 09:55:21.658240 | orchestrator | b7434e1185de nexus.testbed.osism.xyz:8193/kolla/redis:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) redis 2025-02-10 09:55:21.658255 | orchestrator | a4f3b0dc32bf nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) memcached 2025-02-10 09:55:21.658268 | orchestrator | 825cfeeb0cd0 
nexus.testbed.osism.xyz:8193/kolla/cron:2024.1 "dumb-init --single-…" 36 minutes ago Up 36 minutes cron 2025-02-10 09:55:21.658282 | orchestrator | a544dfe8e50e nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1 "dumb-init --single-…" 36 minutes ago Up 36 minutes kolla_toolbox 2025-02-10 09:55:21.658333 | orchestrator | ed3214a2c6c6 nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1 "dumb-init --single-…" 36 minutes ago Up 36 minutes fluentd 2025-02-10 09:55:21.902393 | orchestrator | 2025-02-10 09:55:23.983242 | orchestrator | ## Images @ testbed-node-0 2025-02-10 09:55:23.990924 | orchestrator | 2025-02-10 09:55:23.991003 | orchestrator | + echo 2025-02-10 09:55:23.991014 | orchestrator | + echo '## Images @ testbed-node-0' 2025-02-10 09:55:23.991023 | orchestrator | + echo 2025-02-10 09:55:23.991032 | orchestrator | + osism container testbed-node-0 images 2025-02-10 09:55:23.991063 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-02-10 09:55:23.991073 | orchestrator | nexus.testbed.osism.xyz:8193/osism/ceph-daemon quincy b5bdf5dd4daa 7 hours ago 1.38GB 2025-02-10 09:55:23.991082 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards 2024.1 48a23d775ace 9 hours ago 1.44GB 2025-02-10 09:55:23.991090 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/opensearch 2024.1 7cb8dcf5da7b 9 hours ago 1.48GB 2025-02-10 09:55:23.991099 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/grafana 2024.1 3a9c350934fe 9 hours ago 844MB 2025-02-10 09:55:23.991107 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/memcached 2024.1 a0bf3df1b122 9 hours ago 267MB 2025-02-10 09:55:23.991175 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/fluentd 2024.1 3160b419dff1 9 hours ago 537MB 2025-02-10 09:55:23.991186 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keepalived 2024.1 91c8a48c4e25 9 hours ago 277MB 2025-02-10 09:55:23.991197 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox 2024.1 aa58b6f7f75b 9 hours ago 642MB 2025-02-10 09:55:23.991206 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/rabbitmq 2024.1 700de7f78976 9 hours ago 323MB 2025-02-10 09:55:23.991216 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cron 2024.1 81ea34048fa3 9 hours ago 266MB 2025-02-10 09:55:23.991225 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/haproxy 2024.1 77d60e1615e6 9 hours ago 273MB 2025-02-10 09:55:23.991234 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-inspector 2024.1 87dc5d18471c 9 hours ago 938MB 2025-02-10 09:55:23.991243 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/horizon 2024.1 4ee472650f35 9 hours ago 1.07GB 2025-02-10 09:55:23.991251 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server 2024.1 d114d61087cc 9 hours ago 279MB 2025-02-10 09:55:23.991259 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd 2024.1 0fdda2907482 9 hours ago 279MB 2025-02-10 09:55:23.991268 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter 2024.1 ba63d0144c1e 9 hours ago 297MB 2025-02-10 09:55:23.991279 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter 2024.1 d0bfadf5d329 9 hours ago 292MB 2025-02-10 09:55:23.991287 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor 2024.1 f0da5697abdd 9 hours ago 360MB 2025-02-10 09:55:23.991296 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter 2024.1 22a96182cb59 9 hours ago 295MB 2025-02-10 09:55:23.991305 | orchestrator | 
nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter 2024.1 fb46e5a8211c 9 hours ago 305MB 2025-02-10 09:55:23.991314 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/mariadb-server 2024.1 d8519828720b 9 hours ago 452MB 2025-02-10 09:55:23.991324 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck 2024.1 bdf943174a62 9 hours ago 299MB 2025-02-10 09:55:23.991333 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/redis-sentinel 2024.1 0dcdc5468681 9 hours ago 271MB 2025-02-10 09:55:23.991342 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/redis 2024.1 55935f43a951 9 hours ago 272MB 2025-02-10 09:55:23.991351 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping 2024.1 248489794605 9 hours ago 946MB 2025-02-10 09:55:23.991361 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-worker 2024.1 c12d585eafc7 9 hours ago 946MB 2025-02-10 09:55:23.991371 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager 2024.1 ac16105db0a2 9 hours ago 946MB 2025-02-10 09:55:23.991381 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent 2024.1 3183486c3131 9 hours ago 967MB 2025-02-10 09:55:23.991391 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-api 2024.1 97a64ca35ced 9 hours ago 967MB 2025-02-10 09:55:23.991399 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy 2024.1 0d10c29a8b27 9 hours ago 1.22GB 2025-02-10 09:55:23.991424 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-conductor 2024.1 1d5c3e5d48bf 9 hours ago 1.12GB 2025-02-10 09:55:23.991434 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-scheduler 2024.1 cecd6dc60618 9 hours ago 1.12GB 2025-02-10 09:55:23.991443 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic 2024.1 2004cfcb830a 9 hours ago 1.13GB 2025-02-10 09:55:23.991471 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-api 2024.1 332193e39448 9 hours ago 1.12GB 2025-02-10 09:55:23.991494 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-pxe 2024.1 b868b3abedb3 9 hours ago 1.04GB 2025-02-10 09:55:23.991503 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-api 2024.1 542cd3e8462a 9 hours ago 979MB 2025-02-10 09:55:23.991511 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-conductor 2024.1 f7817066e520 9 hours ago 1.23GB 2025-02-10 09:55:23.991519 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator 2024.1 81100c811e33 9 hours ago 899MB 2025-02-10 09:55:23.991527 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/aodh-notifier 2024.1 02da4b929e1f 9 hours ago 899MB 2025-02-10 09:55:23.991535 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/aodh-listener 2024.1 c5483aba0736 9 hours ago 899MB 2025-02-10 09:55:23.991542 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/aodh-api 2024.1 beac376d07e2 9 hours ago 898MB 2025-02-10 09:55:23.991550 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/placement-api 2024.1 302292ce0f48 9 hours ago 901MB 2025-02-10 09:55:23.991559 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener 2024.1 6e5c2270a58f 9 hours ago 915MB 2025-02-10 09:55:23.991567 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/barbican-api 2024.1 e6065a757edc 9 hours ago 914MB 2025-02-10 09:55:23.991576 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/barbican-worker 2024.1 a3fa0a92cdb5 9 hours ago 915MB 2025-02-10 09:55:23.991584 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cinder-api 2024.1 036860b319e9 9 hours ago 1.3GB 2025-02-10 
09:55:23.991592 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler 2024.1 5a686eddc801 9 hours ago 1.3GB 2025-02-10 09:55:23.991600 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-mdns 2024.1 4fed93e023ee 9 hours ago 908MB 2025-02-10 09:55:23.991610 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-worker 2024.1 fbaa9f83cc85 9 hours ago 913MB 2025-02-10 09:55:23.991617 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-api 2024.1 14ec6e1352d2 9 hours ago 908MB 2025-02-10 09:55:23.991625 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-central 2024.1 05cc507ce7d0 9 hours ago 907MB 2025-02-10 09:55:23.991633 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-producer 2024.1 39e2df140ddd 9 hours ago 908MB 2025-02-10 09:55:23.991643 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9 2024.1 40d47e1402e8 9 hours ago 913MB 2025-02-10 09:55:23.991655 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/neutron-server 2024.1 f768a74f1ef8 9 hours ago 1.07GB 2025-02-10 09:55:23.991664 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent 2024.1 9c7af035d3f9 9 hours ago 1.06GB 2025-02-10 09:55:23.991673 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keystone-fernet 2024.1 c59d26fb3a3e 9 hours ago 950MB 2025-02-10 09:55:23.991695 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keystone 2024.1 5444f82f453a 9 hours ago 974MB 2025-02-10 09:55:23.991704 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keystone-ssh 2024.1 c0d83eca2fa1 9 hours ago 953MB 2025-02-10 09:55:23.991713 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/skyline-console 2024.1 cff4eb4bb90a 9 hours ago 982MB 2025-02-10 09:55:23.991722 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/skyline-apiserver 2024.1 fd6cea8d9833 9 hours ago 960MB 2025-02-10 09:55:23.991731 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/glance-api 2024.1 dac2b3219818 9 hours ago 1GB 2025-02-10 09:55:23.991766 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/magnum-api 2024.1 21e087f28819 9 hours ago 1.03GB 2025-02-10 09:55:23.991777 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/magnum-conductor 2024.1 78cf0f0a6413 9 hours ago 1.14GB 2025-02-10 09:55:23.991785 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-controller 2024.1 3b795eedb32c 9 hours ago 791MB 2025-02-10 09:55:23.991794 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server 2024.1 f1dfccc0e6e6 9 hours ago 790MB 2025-02-10 09:55:23.991802 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server 2024.1 fa69b567d8b5 9 hours ago 790MB 2025-02-10 09:55:23.991810 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-northd 2024.1 52c82c2c989a 9 hours ago 791MB 2025-02-10 09:55:23.991822 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/heat-engine 2024.1 76e33b7ed195 2 weeks ago 964MB 2025-02-10 09:55:23.991850 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/heat-api 2024.1 8f1e4a68f3f7 2 weeks ago 964MB 2025-02-10 09:55:23.991868 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn 2024.1 846e20c1ba8b 2 weeks ago 964MB 2025-02-10 09:55:24.301467 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-02-10 09:55:24.301879 | orchestrator | ++ semver latest 5.0.0 2025-02-10 09:55:24.354112 | orchestrator | 2025-02-10 09:55:26.446798 | orchestrator | ## Containers @ testbed-node-1 2025-02-10 09:55:26.447025 | orchestrator | 2025-02-10 09:55:26.447048 | orchestrator 
| + [[ -1 -eq -1 ]] 2025-02-10 09:55:26.447063 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-02-10 09:55:26.447076 | orchestrator | + echo 2025-02-10 09:55:26.447091 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-02-10 09:55:26.447106 | orchestrator | + echo 2025-02-10 09:55:26.447121 | orchestrator | + osism container testbed-node-1 ps 2025-02-10 09:55:26.447157 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-02-10 09:55:26.448598 | orchestrator | bf2a133365f1 nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-02-10 09:55:26.448625 | orchestrator | fac5e08c5be9 nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-02-10 09:55:26.448643 | orchestrator | b297e2bf12b7 nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-02-10 09:55:26.448658 | orchestrator | a5a86c3979cf nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-02-10 09:55:26.448672 | orchestrator | 22ca61b7f1b8 nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-02-10 09:55:26.448686 | orchestrator | aa91bd544e7d nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) nova_compute_ironic 2025-02-10 09:55:26.448700 | orchestrator | 54788937778b nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-02-10 09:55:26.448714 | orchestrator | 989724d88fba nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-02-10 09:55:26.448728 | orchestrator | 55ccb4fc24b8 nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-02-10 09:55:26.448777 | orchestrator | 0004a174e9a6 nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) glance_api 2025-02-10 09:55:26.448793 | orchestrator | 8e8a3417f83f nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2025-02-10 09:55:26.448806 | orchestrator | c4caab283328 nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-02-10 09:55:26.448820 | orchestrator | 71070b06a3b2 nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_elasticsearch_exporter 2025-02-10 09:55:26.448892 | orchestrator | 92ab05b8e592 nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler 2025-02-10 09:55:26.448925 | orchestrator | 26a2fdc2d209 nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-02-10 09:55:26.448941 | orchestrator | bee29e59613e nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2025-02-10 09:55:26.448955 | orchestrator | 
53577dc258ea nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_memcached_exporter 2025-02-10 09:55:26.448969 | orchestrator | 711d3e6f3c7f nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_mysqld_exporter 2025-02-10 09:55:26.448983 | orchestrator | 9d15b005d9ef nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-02-10 09:55:26.448996 | orchestrator | aa2bc8068823 nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) ironic_neutron_agent 2025-02-10 09:55:26.449028 | orchestrator | 5cc6fc1a58e8 nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-02-10 09:55:26.449043 | orchestrator | 3376d2791920 nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2025-02-10 09:55:26.449057 | orchestrator | 8b00bd1dc994 nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) ironic_http 2025-02-10 09:55:26.449071 | orchestrator | 1eca953ddee7 nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2025-02-10 09:55:26.449085 | orchestrator | 92ce435abe32 nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes ironic_tftp 2025-02-10 09:55:26.449099 | orchestrator | 63726c4c4bac nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) ironic_inspector 2025-02-10 09:55:26.449113 | orchestrator | 7b925ea54b2f nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) ironic_api 2025-02-10 09:55:26.449127 | orchestrator | aeda663d5a21 nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) ironic_conductor 2025-02-10 09:55:26.449149 | orchestrator | fe04a5d37b57 nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2025-02-10 09:55:26.449164 | orchestrator | 3b699ea44b92 nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_worker 2025-02-10 09:55:26.449177 | orchestrator | 4eb14d22bf51 nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_mdns 2025-02-10 09:55:26.449191 | orchestrator | 5b9870dbb429 nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_producer 2025-02-10 09:55:26.449205 | orchestrator | 85909bf97b68 nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1 "dumb-init --single-…" 18 minutes ago Up 17 minutes (healthy) designate_central 2025-02-10 09:55:26.449219 | orchestrator | cd93f5d19dac nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) designate_api 2025-02-10 09:55:26.449232 | orchestrator | 8720f0ce5964 
nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) designate_backend_bind9 2025-02-10 09:55:26.449246 | orchestrator | 4ab28f1fd704 nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_worker 2025-02-10 09:55:26.449260 | orchestrator | a1c6c89ba5f3 nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_keystone_listener 2025-02-10 09:55:26.449274 | orchestrator | ef4575dfab6d nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_api 2025-02-10 09:55:26.449287 | orchestrator | a65f851296eb nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy "/opt/ceph-container…" 19 minutes ago Up 19 minutes ceph-mgr-testbed-node-1 2025-02-10 09:55:26.449301 | orchestrator | e4d90cdf5511 nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) keystone 2025-02-10 09:55:26.449315 | orchestrator | 7a0276562cdb nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) keystone_fernet 2025-02-10 09:55:26.449358 | orchestrator | 6a62b2a0cfb3 nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (unhealthy) horizon 2025-02-10 09:55:26.449375 | orchestrator | 802077bf2b49 nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) keystone_ssh 2025-02-10 09:55:26.449389 | orchestrator | 5a037ceb3d78 nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1 "dumb-init -- kolla_…" 25 minutes ago Up 25 minutes (healthy) mariadb 2025-02-10 09:55:26.449403 | orchestrator | 6c5ad29f735c nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes mariadb_clustercheck 2025-02-10 09:55:26.449418 | orchestrator | eb9887610ed6 nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) opensearch_dashboards 2025-02-10 09:55:26.451131 | orchestrator | 86c280d0016f nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy "/usr/bin/ceph-crash" 26 minutes ago Up 26 minutes ceph-crash-testbed-node-1 2025-02-10 09:55:26.451166 | orchestrator | 70406c24c7e2 nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) opensearch 2025-02-10 09:55:26.451182 | orchestrator | 991ed8be03d5 nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes keepalived 2025-02-10 09:55:26.451203 | orchestrator | b46cb4ab6994 nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) haproxy 2025-02-10 09:55:26.451217 | orchestrator | f8969828846f nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1 "dumb-init --single-…" 32 minutes ago Up 31 minutes ovn_northd 2025-02-10 09:55:26.451231 | orchestrator | 06778884dee4 nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes ovn_sb_db 2025-02-10 09:55:26.451245 | orchestrator | 20ef0d984121 nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy "/opt/ceph-container…" 32 minutes ago Up 32 minutes ceph-mon-testbed-node-1 2025-02-10 09:55:26.451259 | orchestrator 
| 870528c49649 nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes ovn_nb_db 2025-02-10 09:55:26.451273 | orchestrator | 76a871560d53 nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1 "dumb-init --single-…" 33 minutes ago Up 33 minutes ovn_controller 2025-02-10 09:55:26.451287 | orchestrator | c9262a2b767b nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) rabbitmq 2025-02-10 09:55:26.451301 | orchestrator | c26a55a8dd1e nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1 "dumb-init --single-…" 35 minutes ago Up 34 minutes (healthy) openvswitch_vswitchd 2025-02-10 09:55:26.451315 | orchestrator | a6f6de433d75 nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) openvswitch_db 2025-02-10 09:55:26.451329 | orchestrator | 75d0e5d085b5 nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) redis_sentinel 2025-02-10 09:55:26.451343 | orchestrator | 250c179d6288 nexus.testbed.osism.xyz:8193/kolla/redis:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) redis 2025-02-10 09:55:26.451357 | orchestrator | b21cdb4e8784 nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) memcached 2025-02-10 09:55:26.451371 | orchestrator | 6db7022282ec nexus.testbed.osism.xyz:8193/kolla/cron:2024.1 "dumb-init --single-…" 36 minutes ago Up 36 minutes cron 2025-02-10 09:55:26.451384 | orchestrator | 8a7f3937f06b nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1 "dumb-init --single-…" 36 minutes ago Up 36 minutes kolla_toolbox 2025-02-10 09:55:26.451416 | orchestrator | c45d52ce25a3 nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1 "dumb-init --single-…" 36 minutes ago Up 36 minutes fluentd 2025-02-10 09:55:26.724452 | orchestrator | 2025-02-10 09:55:28.684786 | orchestrator | ## Images @ testbed-node-1 2025-02-10 09:55:28.685057 | orchestrator | 2025-02-10 09:55:28.685095 | orchestrator | + echo 2025-02-10 09:55:28.685151 | orchestrator | + echo '## Images @ testbed-node-1' 2025-02-10 09:55:28.685181 | orchestrator | + echo 2025-02-10 09:55:28.685208 | orchestrator | + osism container testbed-node-1 images 2025-02-10 09:55:28.685262 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-02-10 09:55:28.685290 | orchestrator | nexus.testbed.osism.xyz:8193/osism/ceph-daemon quincy b5bdf5dd4daa 7 hours ago 1.38GB 2025-02-10 09:55:28.685315 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards 2024.1 48a23d775ace 9 hours ago 1.44GB 2025-02-10 09:55:28.685340 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/opensearch 2024.1 7cb8dcf5da7b 9 hours ago 1.48GB 2025-02-10 09:55:28.685365 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/grafana 2024.1 3a9c350934fe 9 hours ago 844MB 2025-02-10 09:55:28.685391 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/memcached 2024.1 a0bf3df1b122 9 hours ago 267MB 2025-02-10 09:55:28.685416 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/fluentd 2024.1 3160b419dff1 9 hours ago 537MB 2025-02-10 09:55:28.685445 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keepalived 2024.1 91c8a48c4e25 9 hours ago 277MB 2025-02-10 09:55:28.685478 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox 2024.1 aa58b6f7f75b 9 hours ago 642MB 2025-02-10 09:55:28.685508 | orchestrator | 
nexus.testbed.osism.xyz:8193/kolla/cron 2024.1 81ea34048fa3 9 hours ago 266MB 2025-02-10 09:55:28.685539 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/rabbitmq 2024.1 700de7f78976 9 hours ago 323MB 2025-02-10 09:55:28.685567 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/haproxy 2024.1 77d60e1615e6 9 hours ago 273MB 2025-02-10 09:55:28.685610 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-inspector 2024.1 87dc5d18471c 9 hours ago 938MB 2025-02-10 09:55:28.685637 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/horizon 2024.1 4ee472650f35 9 hours ago 1.07GB 2025-02-10 09:55:28.685663 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server 2024.1 d114d61087cc 9 hours ago 279MB 2025-02-10 09:55:28.685693 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd 2024.1 0fdda2907482 9 hours ago 279MB 2025-02-10 09:55:28.685720 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter 2024.1 ba63d0144c1e 9 hours ago 297MB 2025-02-10 09:55:28.685746 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter 2024.1 d0bfadf5d329 9 hours ago 292MB 2025-02-10 09:55:28.685773 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor 2024.1 f0da5697abdd 9 hours ago 360MB 2025-02-10 09:55:28.685799 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter 2024.1 22a96182cb59 9 hours ago 295MB 2025-02-10 09:55:28.685858 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter 2024.1 fb46e5a8211c 9 hours ago 305MB 2025-02-10 09:55:28.685885 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/mariadb-server 2024.1 d8519828720b 9 hours ago 452MB 2025-02-10 09:55:28.685909 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck 2024.1 bdf943174a62 9 hours ago 299MB 2025-02-10 09:55:28.685933 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/redis-sentinel 2024.1 0dcdc5468681 9 hours ago 271MB 2025-02-10 09:55:28.685958 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/redis 2024.1 55935f43a951 9 hours ago 272MB 2025-02-10 09:55:28.685983 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping 2024.1 248489794605 9 hours ago 946MB 2025-02-10 09:55:28.686007 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-worker 2024.1 c12d585eafc7 9 hours ago 946MB 2025-02-10 09:55:28.686136 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager 2024.1 ac16105db0a2 9 hours ago 946MB 2025-02-10 09:55:28.686183 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent 2024.1 3183486c3131 9 hours ago 967MB 2025-02-10 09:55:28.686210 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-api 2024.1 97a64ca35ced 9 hours ago 967MB 2025-02-10 09:55:28.686233 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy 2024.1 0d10c29a8b27 9 hours ago 1.22GB 2025-02-10 09:55:28.686256 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-conductor 2024.1 1d5c3e5d48bf 9 hours ago 1.12GB 2025-02-10 09:55:28.686280 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-scheduler 2024.1 cecd6dc60618 9 hours ago 1.12GB 2025-02-10 09:55:28.686305 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic 2024.1 2004cfcb830a 9 hours ago 1.13GB 2025-02-10 09:55:28.686330 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-api 2024.1 332193e39448 9 hours ago 1.12GB 2025-02-10 09:55:28.686373 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-pxe 2024.1 
b868b3abedb3 9 hours ago 1.04GB 2025-02-10 09:55:28.940533 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-api 2024.1 542cd3e8462a 9 hours ago 979MB 2025-02-10 09:55:28.940662 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-conductor 2024.1 f7817066e520 9 hours ago 1.23GB 2025-02-10 09:55:28.940680 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/placement-api 2024.1 302292ce0f48 9 hours ago 901MB 2025-02-10 09:55:28.940693 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener 2024.1 6e5c2270a58f 9 hours ago 915MB 2025-02-10 09:55:28.940704 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/barbican-api 2024.1 e6065a757edc 9 hours ago 914MB 2025-02-10 09:55:28.940716 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/barbican-worker 2024.1 a3fa0a92cdb5 9 hours ago 915MB 2025-02-10 09:55:28.940727 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cinder-api 2024.1 036860b319e9 9 hours ago 1.3GB 2025-02-10 09:55:28.940739 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler 2024.1 5a686eddc801 9 hours ago 1.3GB 2025-02-10 09:55:28.940750 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-mdns 2024.1 4fed93e023ee 9 hours ago 908MB 2025-02-10 09:55:28.940761 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-worker 2024.1 fbaa9f83cc85 9 hours ago 913MB 2025-02-10 09:55:28.940772 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-api 2024.1 14ec6e1352d2 9 hours ago 908MB 2025-02-10 09:55:28.940783 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-central 2024.1 05cc507ce7d0 9 hours ago 907MB 2025-02-10 09:55:28.940794 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-producer 2024.1 39e2df140ddd 9 hours ago 908MB 2025-02-10 09:55:28.940805 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9 2024.1 40d47e1402e8 9 hours ago 913MB 2025-02-10 09:55:28.940816 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/neutron-server 2024.1 f768a74f1ef8 9 hours ago 1.07GB 2025-02-10 09:55:28.940880 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent 2024.1 9c7af035d3f9 9 hours ago 1.06GB 2025-02-10 09:55:28.940891 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keystone-fernet 2024.1 c59d26fb3a3e 9 hours ago 950MB 2025-02-10 09:55:28.940902 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keystone 2024.1 5444f82f453a 9 hours ago 974MB 2025-02-10 09:55:28.940913 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keystone-ssh 2024.1 c0d83eca2fa1 9 hours ago 953MB 2025-02-10 09:55:28.940924 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/glance-api 2024.1 dac2b3219818 9 hours ago 1GB 2025-02-10 09:55:28.940974 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/magnum-api 2024.1 21e087f28819 9 hours ago 1.03GB 2025-02-10 09:55:28.940985 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/magnum-conductor 2024.1 78cf0f0a6413 9 hours ago 1.14GB 2025-02-10 09:55:28.940997 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-controller 2024.1 3b795eedb32c 9 hours ago 791MB 2025-02-10 09:55:28.941009 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server 2024.1 f1dfccc0e6e6 9 hours ago 790MB 2025-02-10 09:55:28.941020 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server 2024.1 fa69b567d8b5 9 hours ago 790MB 2025-02-10 09:55:28.941031 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-northd 2024.1 52c82c2c989a 9 hours ago 791MB 2025-02-10 09:55:28.941060 | orchestrator | + for node in 
testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-02-10 09:55:28.941120 | orchestrator | ++ semver latest 5.0.0 2025-02-10 09:55:29.000527 | orchestrator | 2025-02-10 09:55:31.049280 | orchestrator | ## Containers @ testbed-node-2 2025-02-10 09:55:31.049505 | orchestrator | 2025-02-10 09:55:31.049535 | orchestrator | + [[ -1 -eq -1 ]] 2025-02-10 09:55:31.049557 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-02-10 09:55:31.049578 | orchestrator | + echo 2025-02-10 09:55:31.049600 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-02-10 09:55:31.049623 | orchestrator | + echo 2025-02-10 09:55:31.049668 | orchestrator | + osism container testbed-node-2 ps 2025-02-10 09:55:31.049732 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-02-10 09:55:31.049756 | orchestrator | b729b9aa3f1a nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-02-10 09:55:31.049780 | orchestrator | da35baa83c3b nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-02-10 09:55:31.049804 | orchestrator | 2ab3766fed40 nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-02-10 09:55:31.049869 | orchestrator | 3b3a9512cbc1 nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1 "dumb-init --single-…" 5 minutes ago Up 4 minutes octavia_driver_agent 2025-02-10 09:55:31.049905 | orchestrator | 46f246c241a5 nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-02-10 09:55:31.049934 | orchestrator | 9d1d1a7da4bd nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) nova_compute_ironic 2025-02-10 09:55:31.049961 | orchestrator | 07241aaa02a6 nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-02-10 09:55:31.049983 | orchestrator | aabaa60c6eb7 nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-02-10 09:55:31.050133 | orchestrator | 40cf43a8516a nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-02-10 09:55:31.050170 | orchestrator | 28d237c64168 nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) glance_api 2025-02-10 09:55:31.050193 | orchestrator | f7588394d4e4 nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2025-02-10 09:55:31.050251 | orchestrator | ba90138b0cac nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-02-10 09:55:31.050272 | orchestrator | 8699e240a6b7 nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1 "dumb-init --single-…" 11 minutes ago Up 10 minutes prometheus_elasticsearch_exporter 2025-02-10 09:55:31.050343 | orchestrator | c90219651552 nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) cinder_scheduler 2025-02-10 09:55:31.050367 | orchestrator | efbd0c4e53f7 
nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-02-10 09:55:31.050387 | orchestrator | 1c211619458f nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2025-02-10 09:55:31.050411 | orchestrator | 51de85d741d8 nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_memcached_exporter 2025-02-10 09:55:31.050432 | orchestrator | db77cd0009e4 nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2025-02-10 09:55:31.050452 | orchestrator | af26295c9497 nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-02-10 09:55:31.050473 | orchestrator | a07242b6821a nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) ironic_neutron_agent 2025-02-10 09:55:31.050509 | orchestrator | f95b5aab32d3 nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-02-10 09:55:31.050532 | orchestrator | f0ec4769a93f nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2025-02-10 09:55:31.050554 | orchestrator | 87fc78fbf076 nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) ironic_http 2025-02-10 09:55:31.050576 | orchestrator | df88724be5e1 nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2025-02-10 09:55:31.050598 | orchestrator | 06e125dd2289 nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes ironic_tftp 2025-02-10 09:55:31.050631 | orchestrator | 25ff1df13531 nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) ironic_inspector 2025-02-10 09:55:31.050652 | orchestrator | 9cf62db57757 nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) ironic_api 2025-02-10 09:55:31.050676 | orchestrator | 2c2eaa26ffb7 nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) ironic_conductor 2025-02-10 09:55:31.050698 | orchestrator | 87c29b146540 nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) placement_api 2025-02-10 09:55:31.050734 | orchestrator | c2ae5538d9fc nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_worker 2025-02-10 09:55:31.050756 | orchestrator | 52c25c0c8568 nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_mdns 2025-02-10 09:55:31.050776 | orchestrator | 1340574a5a03 nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_producer 2025-02-10 09:55:31.050802 | orchestrator | d2cdfefecb4a nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1 "dumb-init --single-…" 18 
minutes ago Up 18 minutes (healthy) designate_central 2025-02-10 09:55:31.050865 | orchestrator | 6b7791b6bf06 nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) designate_api 2025-02-10 09:55:31.050896 | orchestrator | 25568e4f83cb nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) designate_backend_bind9 2025-02-10 09:55:31.050925 | orchestrator | 58e6162f51d6 nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_worker 2025-02-10 09:55:31.050946 | orchestrator | 83be2f8feed0 nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) barbican_keystone_listener 2025-02-10 09:55:31.050969 | orchestrator | 89180ca8c4c1 nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) barbican_api 2025-02-10 09:55:31.050989 | orchestrator | 5d57fa833ae9 nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy "/opt/ceph-container…" 19 minutes ago Up 19 minutes ceph-mgr-testbed-node-2 2025-02-10 09:55:31.051023 | orchestrator | 4086177b1217 nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) keystone 2025-02-10 09:55:31.051049 | orchestrator | 876d501c7b87 nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) keystone_fernet 2025-02-10 09:55:31.051093 | orchestrator | 7d16a5d42da4 nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (unhealthy) horizon 2025-02-10 09:55:31.051118 | orchestrator | bdbe2b27a1f9 nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) keystone_ssh 2025-02-10 09:55:31.051165 | orchestrator | 10ce7d7c4e63 nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1 "dumb-init -- kolla_…" 25 minutes ago Up 25 minutes (healthy) mariadb 2025-02-10 09:55:31.051212 | orchestrator | 6b78334fe9f5 nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes mariadb_clustercheck 2025-02-10 09:55:31.051239 | orchestrator | b00eb6a3bb92 nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) opensearch_dashboards 2025-02-10 09:55:31.051265 | orchestrator | 7af17292e5a7 nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy "/usr/bin/ceph-crash" 26 minutes ago Up 26 minutes ceph-crash-testbed-node-2 2025-02-10 09:55:31.051312 | orchestrator | 38e6e0b09917 nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) opensearch 2025-02-10 09:55:31.051368 | orchestrator | ee73b34efa8b nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes keepalived 2025-02-10 09:55:31.051403 | orchestrator | 0771f77957a6 nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) haproxy 2025-02-10 09:55:31.051459 | orchestrator | a6a262b85557 nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy "/opt/ceph-container…" 32 minutes ago Up 32 minutes ceph-mon-testbed-node-2 2025-02-10 09:55:31.051509 | orchestrator | f0922a863a8b 
nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes ovn_northd 2025-02-10 09:55:31.051541 | orchestrator | ddc3aa42ff08 nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes ovn_sb_db 2025-02-10 09:55:31.051582 | orchestrator | 58a299012a25 nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes ovn_nb_db 2025-02-10 09:55:31.051646 | orchestrator | b0bd16811cdf nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) rabbitmq 2025-02-10 09:55:31.051693 | orchestrator | 9842a5e751a6 nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1 "dumb-init --single-…" 33 minutes ago Up 33 minutes ovn_controller 2025-02-10 09:55:31.051716 | orchestrator | 464a88f2ce8f nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1 "dumb-init --single-…" 35 minutes ago Up 34 minutes (healthy) openvswitch_vswitchd 2025-02-10 09:55:31.051737 | orchestrator | b4ea455bc999 nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) openvswitch_db 2025-02-10 09:55:31.051778 | orchestrator | a3898575714e nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) redis_sentinel 2025-02-10 09:55:31.051815 | orchestrator | 6900d5db99ae nexus.testbed.osism.xyz:8193/kolla/redis:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) redis 2025-02-10 09:55:31.051938 | orchestrator | b76bb8b84844 nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) memcached 2025-02-10 09:55:31.051973 | orchestrator | 052fd0d7ff5c nexus.testbed.osism.xyz:8193/kolla/cron:2024.1 "dumb-init --single-…" 36 minutes ago Up 36 minutes cron 2025-02-10 09:55:31.052015 | orchestrator | 3dee225f0736 nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1 "dumb-init --single-…" 36 minutes ago Up 36 minutes kolla_toolbox 2025-02-10 09:55:31.052087 | orchestrator | 3d2d3cd949ea nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1 "dumb-init --single-…" 36 minutes ago Up 36 minutes fluentd 2025-02-10 09:55:31.300337 | orchestrator | 2025-02-10 09:55:33.403067 | orchestrator | ## Images @ testbed-node-2 2025-02-10 09:55:33.403170 | orchestrator | 2025-02-10 09:55:33.403186 | orchestrator | + echo 2025-02-10 09:55:33.403199 | orchestrator | + echo '## Images @ testbed-node-2' 2025-02-10 09:55:33.403213 | orchestrator | + echo 2025-02-10 09:55:33.403378 | orchestrator | + osism container testbed-node-2 images 2025-02-10 09:55:33.403447 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-02-10 09:55:33.403491 | orchestrator | nexus.testbed.osism.xyz:8193/osism/ceph-daemon quincy b5bdf5dd4daa 7 hours ago 1.38GB 2025-02-10 09:55:33.403545 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards 2024.1 48a23d775ace 9 hours ago 1.44GB 2025-02-10 09:55:33.403567 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/opensearch 2024.1 7cb8dcf5da7b 9 hours ago 1.48GB 2025-02-10 09:55:33.403587 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/grafana 2024.1 3a9c350934fe 9 hours ago 844MB 2025-02-10 09:55:33.403606 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/memcached 2024.1 a0bf3df1b122 9 hours ago 267MB 2025-02-10 09:55:33.403626 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/fluentd 2024.1 3160b419dff1 9 
hours ago 537MB 2025-02-10 09:55:33.403647 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keepalived 2024.1 91c8a48c4e25 9 hours ago 277MB 2025-02-10 09:55:33.403668 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox 2024.1 aa58b6f7f75b 9 hours ago 642MB 2025-02-10 09:55:33.403690 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cron 2024.1 81ea34048fa3 9 hours ago 266MB 2025-02-10 09:55:33.403712 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/rabbitmq 2024.1 700de7f78976 9 hours ago 323MB 2025-02-10 09:55:33.403733 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/haproxy 2024.1 77d60e1615e6 9 hours ago 273MB 2025-02-10 09:55:33.403753 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-inspector 2024.1 87dc5d18471c 9 hours ago 938MB 2025-02-10 09:55:33.403774 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/horizon 2024.1 4ee472650f35 9 hours ago 1.07GB 2025-02-10 09:55:33.403795 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server 2024.1 d114d61087cc 9 hours ago 279MB 2025-02-10 09:55:33.403815 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd 2024.1 0fdda2907482 9 hours ago 279MB 2025-02-10 09:55:33.403864 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter 2024.1 ba63d0144c1e 9 hours ago 297MB 2025-02-10 09:55:33.403884 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter 2024.1 d0bfadf5d329 9 hours ago 292MB 2025-02-10 09:55:33.403904 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor 2024.1 f0da5697abdd 9 hours ago 360MB 2025-02-10 09:55:33.403924 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter 2024.1 22a96182cb59 9 hours ago 295MB 2025-02-10 09:55:33.403944 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter 2024.1 fb46e5a8211c 9 hours ago 305MB 2025-02-10 09:55:33.403964 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/mariadb-server 2024.1 d8519828720b 9 hours ago 452MB 2025-02-10 09:55:33.403983 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck 2024.1 bdf943174a62 9 hours ago 299MB 2025-02-10 09:55:33.404003 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/redis-sentinel 2024.1 0dcdc5468681 9 hours ago 271MB 2025-02-10 09:55:33.404032 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/redis 2024.1 55935f43a951 9 hours ago 272MB 2025-02-10 09:55:33.404053 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping 2024.1 248489794605 9 hours ago 946MB 2025-02-10 09:55:33.404140 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-worker 2024.1 c12d585eafc7 9 hours ago 946MB 2025-02-10 09:55:33.404168 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager 2024.1 ac16105db0a2 9 hours ago 946MB 2025-02-10 09:55:33.404195 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent 2024.1 3183486c3131 9 hours ago 967MB 2025-02-10 09:55:33.404220 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-api 2024.1 97a64ca35ced 9 hours ago 967MB 2025-02-10 09:55:33.404261 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy 2024.1 0d10c29a8b27 9 hours ago 1.22GB 2025-02-10 09:55:33.404287 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-conductor 2024.1 1d5c3e5d48bf 9 hours ago 1.12GB 2025-02-10 09:55:33.404489 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-scheduler 2024.1 cecd6dc60618 9 hours ago 1.12GB 2025-02-10 09:55:33.404525 | orchestrator | 
nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic 2024.1 2004cfcb830a 9 hours ago 1.13GB 2025-02-10 09:55:33.404549 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-api 2024.1 332193e39448 9 hours ago 1.12GB 2025-02-10 09:55:33.404588 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-pxe 2024.1 b868b3abedb3 9 hours ago 1.04GB 2025-02-10 09:55:33.643410 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-api 2024.1 542cd3e8462a 9 hours ago 979MB 2025-02-10 09:55:33.643529 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-conductor 2024.1 f7817066e520 9 hours ago 1.23GB 2025-02-10 09:55:33.643549 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/placement-api 2024.1 302292ce0f48 9 hours ago 901MB 2025-02-10 09:55:33.643564 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener 2024.1 6e5c2270a58f 9 hours ago 915MB 2025-02-10 09:55:33.643579 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/barbican-api 2024.1 e6065a757edc 9 hours ago 914MB 2025-02-10 09:55:33.643595 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/barbican-worker 2024.1 a3fa0a92cdb5 9 hours ago 915MB 2025-02-10 09:55:33.643619 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cinder-api 2024.1 036860b319e9 9 hours ago 1.3GB 2025-02-10 09:55:33.643641 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler 2024.1 5a686eddc801 9 hours ago 1.3GB 2025-02-10 09:55:33.643665 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-mdns 2024.1 4fed93e023ee 9 hours ago 908MB 2025-02-10 09:55:33.643687 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-worker 2024.1 fbaa9f83cc85 9 hours ago 913MB 2025-02-10 09:55:33.643711 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-api 2024.1 14ec6e1352d2 9 hours ago 908MB 2025-02-10 09:55:33.643734 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-central 2024.1 05cc507ce7d0 9 hours ago 907MB 2025-02-10 09:55:33.643758 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-producer 2024.1 39e2df140ddd 9 hours ago 908MB 2025-02-10 09:55:33.643783 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9 2024.1 40d47e1402e8 9 hours ago 913MB 2025-02-10 09:55:33.643800 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/neutron-server 2024.1 f768a74f1ef8 9 hours ago 1.07GB 2025-02-10 09:55:33.643815 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent 2024.1 9c7af035d3f9 9 hours ago 1.06GB 2025-02-10 09:55:33.643891 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keystone-fernet 2024.1 c59d26fb3a3e 9 hours ago 950MB 2025-02-10 09:55:33.643908 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keystone 2024.1 5444f82f453a 9 hours ago 974MB 2025-02-10 09:55:33.643922 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keystone-ssh 2024.1 c0d83eca2fa1 9 hours ago 953MB 2025-02-10 09:55:33.643939 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/glance-api 2024.1 dac2b3219818 9 hours ago 1GB 2025-02-10 09:55:33.643955 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/magnum-api 2024.1 21e087f28819 9 hours ago 1.03GB 2025-02-10 09:55:33.643971 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/magnum-conductor 2024.1 78cf0f0a6413 9 hours ago 1.14GB 2025-02-10 09:55:33.644010 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-controller 2024.1 3b795eedb32c 9 hours ago 791MB 2025-02-10 09:55:33.644026 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server 2024.1 f1dfccc0e6e6 9 hours ago 790MB 2025-02-10 
09:55:33.644041 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server 2024.1 fa69b567d8b5 9 hours ago 790MB 2025-02-10 09:55:33.644056 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-northd 2024.1 52c82c2c989a 9 hours ago 791MB 2025-02-10 09:55:33.644090 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-02-10 09:55:33.650237 | orchestrator | + set -e 2025-02-10 09:55:33.650996 | orchestrator | + source /opt/manager-vars.sh 2025-02-10 09:55:33.651039 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-10 09:55:33.660404 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-10 09:55:33.660468 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-10 09:55:33.660483 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-10 09:55:33.660496 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-10 09:55:33.660512 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-10 09:55:33.660526 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-10 09:55:33.660540 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-10 09:55:33.660554 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-10 09:55:33.660568 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-10 09:55:33.660582 | orchestrator | ++ export ARA=false 2025-02-10 09:55:33.660595 | orchestrator | ++ ARA=false 2025-02-10 09:55:33.660610 | orchestrator | ++ export TEMPEST=false 2025-02-10 09:55:33.660623 | orchestrator | ++ TEMPEST=false 2025-02-10 09:55:33.660637 | orchestrator | ++ export IS_ZUUL=true 2025-02-10 09:55:33.660650 | orchestrator | ++ IS_ZUUL=true 2025-02-10 09:55:33.660664 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.97 2025-02-10 09:55:33.660678 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.97 2025-02-10 09:55:33.660707 | orchestrator | ++ export EXTERNAL_API=false 2025-02-10 09:55:33.660721 | orchestrator | ++ EXTERNAL_API=false 2025-02-10 09:55:33.660735 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-10 09:55:33.660748 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-10 09:55:33.660762 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-10 09:55:33.660776 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-10 09:55:33.660789 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-02-10 09:55:33.660803 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-10 09:55:33.660846 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-02-10 09:55:33.660872 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-02-10 09:55:33.660910 | orchestrator | + set -e 2025-02-10 09:55:33.661636 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-10 09:55:33.661677 | orchestrator | ++ export INTERACTIVE=false 2025-02-10 09:55:33.661695 | orchestrator | ++ INTERACTIVE=false 2025-02-10 09:55:33.661709 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-10 09:55:33.661723 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-10 09:55:33.661737 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-02-10 09:55:33.661760 | orchestrator | +++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2025-02-10 09:55:33.698743 | orchestrator | 2025-02-10 09:55:34.435182 | orchestrator | # Ceph status 2025-02-10 09:55:34.435311 | orchestrator | 2025-02-10 09:55:34.435330 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-10 09:55:34.435366 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-10 09:55:34.435381 | orchestrator | + echo 
2025-02-10 09:55:34.435395 | orchestrator | + echo '# Ceph status' 2025-02-10 09:55:34.435409 | orchestrator | + echo 2025-02-10 09:55:34.435423 | orchestrator | + ceph -s 2025-02-10 09:55:34.435463 | orchestrator | cluster: 2025-02-10 09:55:34.469411 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-02-10 09:55:34.469504 | orchestrator | health: HEALTH_OK 2025-02-10 09:55:34.469516 | orchestrator | 2025-02-10 09:55:34.469526 | orchestrator | services: 2025-02-10 09:55:34.469535 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 32m) 2025-02-10 09:55:34.469545 | orchestrator | mgr: testbed-node-0(active, since 19m), standbys: testbed-node-1, testbed-node-2 2025-02-10 09:55:34.469554 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-02-10 09:55:34.469562 | orchestrator | osd: 6 osds: 6 up (since 28m), 6 in (since 29m) 2025-02-10 09:55:34.469571 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-02-10 09:55:34.469579 | orchestrator | 2025-02-10 09:55:34.469587 | orchestrator | data: 2025-02-10 09:55:34.469616 | orchestrator | volumes: 1/1 healthy 2025-02-10 09:55:34.469625 | orchestrator | pools: 14 pools, 401 pgs 2025-02-10 09:55:34.469633 | orchestrator | objects: 519 objects, 2.2 GiB 2025-02-10 09:55:34.469641 | orchestrator | usage: 8.4 GiB used, 111 GiB / 120 GiB avail 2025-02-10 09:55:34.469649 | orchestrator | pgs: 401 active+clean 2025-02-10 09:55:34.469657 | orchestrator | 2025-02-10 09:55:34.469677 | orchestrator | 2025-02-10 09:55:35.084409 | orchestrator | # Ceph versions 2025-02-10 09:55:35.084500 | orchestrator | 2025-02-10 09:55:35.084508 | orchestrator | + echo 2025-02-10 09:55:35.084513 | orchestrator | + echo '# Ceph versions' 2025-02-10 09:55:35.084519 | orchestrator | + echo 2025-02-10 09:55:35.084524 | orchestrator | + ceph versions 2025-02-10 09:55:35.084541 | orchestrator | { 2025-02-10 09:55:35.116246 | orchestrator | "mon": { 2025-02-10 09:55:35.116321 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 3 2025-02-10 09:55:35.116328 | orchestrator | }, 2025-02-10 09:55:35.116334 | orchestrator | "mgr": { 2025-02-10 09:55:35.116340 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 3 2025-02-10 09:55:35.116345 | orchestrator | }, 2025-02-10 09:55:35.116350 | orchestrator | "osd": { 2025-02-10 09:55:35.116355 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 6 2025-02-10 09:55:35.116360 | orchestrator | }, 2025-02-10 09:55:35.116365 | orchestrator | "mds": { 2025-02-10 09:55:35.116371 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 3 2025-02-10 09:55:35.116375 | orchestrator | }, 2025-02-10 09:55:35.116380 | orchestrator | "rgw": { 2025-02-10 09:55:35.116385 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 3 2025-02-10 09:55:35.116390 | orchestrator | }, 2025-02-10 09:55:35.116395 | orchestrator | "overall": { 2025-02-10 09:55:35.116400 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 18 2025-02-10 09:55:35.116405 | orchestrator | } 2025-02-10 09:55:35.116410 | orchestrator | } 2025-02-10 09:55:35.116427 | orchestrator | 2025-02-10 09:55:35.611144 | orchestrator | # Ceph OSD tree 2025-02-10 09:55:35.611250 | orchestrator | 2025-02-10 09:55:35.611258 | orchestrator | + echo 2025-02-10 09:55:35.611264 | orchestrator 
| + echo '# Ceph OSD tree' 2025-02-10 09:55:35.611269 | orchestrator | + echo 2025-02-10 09:55:35.611275 | orchestrator | + ceph osd df tree 2025-02-10 09:55:35.611295 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-02-10 09:55:35.645537 | orchestrator | -1 0.11691 - 120 GiB 8.4 GiB 6.7 GiB 0 B 1.7 GiB 111 GiB 7.01 1.00 - root default 2025-02-10 09:55:35.645634 | orchestrator | -3 0.03897 - 40 GiB 2.8 GiB 2.2 GiB 0 B 592 MiB 37 GiB 7.01 1.00 - host testbed-node-3 2025-02-10 09:55:35.645644 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.3 GiB 0 B 298 MiB 18 GiB 8.07 1.15 204 up osd.1 2025-02-10 09:55:35.645653 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 922 MiB 0 B 294 MiB 19 GiB 5.94 0.85 186 up osd.4 2025-02-10 09:55:35.645662 | orchestrator | -7 0.03897 - 40 GiB 2.8 GiB 2.2 GiB 0 B 596 MiB 37 GiB 7.02 1.00 - host testbed-node-4 2025-02-10 09:55:35.645670 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.2 GiB 0 B 298 MiB 18 GiB 7.68 1.10 191 up osd.2 2025-02-10 09:55:35.645678 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1002 MiB 0 B 298 MiB 19 GiB 6.36 0.91 197 up osd.5 2025-02-10 09:55:35.645686 | orchestrator | -5 0.03897 - 40 GiB 2.8 GiB 2.2 GiB 0 B 592 MiB 37 GiB 7.01 1.00 - host testbed-node-5 2025-02-10 09:55:35.645694 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.2 GiB 938 MiB 0 B 294 MiB 19 GiB 6.02 0.86 174 up osd.0 2025-02-10 09:55:35.645702 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.3 GiB 0 B 298 MiB 18 GiB 8.00 1.14 218 up osd.3 2025-02-10 09:55:35.645710 | orchestrator | TOTAL 120 GiB 8.4 GiB 6.7 GiB 0 B 1.7 GiB 111 GiB 7.01 2025-02-10 09:55:35.645718 | orchestrator | MIN/MAX VAR: 0.85/1.15 STDDEV: 0.92 2025-02-10 09:55:35.645776 | orchestrator | 2025-02-10 09:55:36.266329 | orchestrator | # Ceph monitor status 2025-02-10 09:55:36.266473 | orchestrator | 2025-02-10 09:55:36.266492 | orchestrator | + echo 2025-02-10 09:55:36.266506 | orchestrator | + echo '# Ceph monitor status' 2025-02-10 09:55:36.266518 | orchestrator | + echo 2025-02-10 09:55:36.266531 | orchestrator | + ceph mon stat 2025-02-10 09:55:36.266576 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {1}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-02-10 09:55:36.295667 | orchestrator | 2025-02-10 09:55:36.296199 | orchestrator | # Ceph quorum status 2025-02-10 09:55:36.296234 | orchestrator | 2025-02-10 09:55:36.296248 | orchestrator | + echo 2025-02-10 09:55:36.296261 | orchestrator | + echo '# Ceph quorum status' 2025-02-10 09:55:36.296273 | orchestrator | + echo 2025-02-10 09:55:36.296293 | orchestrator | + ceph quorum_status 2025-02-10 09:55:36.297677 | orchestrator | + jq 2025-02-10 09:55:36.906303 | orchestrator | { 2025-02-10 09:55:36.907423 | orchestrator | "election_epoch": 6, 2025-02-10 09:55:36.907465 | orchestrator | "quorum": [ 2025-02-10 09:55:36.907481 | orchestrator | 0, 2025-02-10 09:55:36.907495 | orchestrator | 1, 2025-02-10 09:55:36.907510 | orchestrator | 2 2025-02-10 09:55:36.907525 | orchestrator | ], 2025-02-10 09:55:36.907539 | orchestrator | "quorum_names": [ 2025-02-10 09:55:36.907565 | orchestrator | "testbed-node-0", 2025-02-10 09:55:36.907579 | orchestrator | "testbed-node-1", 2025-02-10 09:55:36.907593 | orchestrator | 
"testbed-node-2" 2025-02-10 09:55:36.907607 | orchestrator | ], 2025-02-10 09:55:36.907621 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-02-10 09:55:36.907637 | orchestrator | "quorum_age": 1939, 2025-02-10 09:55:36.907651 | orchestrator | "features": { 2025-02-10 09:55:36.907665 | orchestrator | "quorum_con": "4540138320759226367", 2025-02-10 09:55:36.907679 | orchestrator | "quorum_mon": [ 2025-02-10 09:55:36.907693 | orchestrator | "kraken", 2025-02-10 09:55:36.907707 | orchestrator | "luminous", 2025-02-10 09:55:36.907721 | orchestrator | "mimic", 2025-02-10 09:55:36.907735 | orchestrator | "osdmap-prune", 2025-02-10 09:55:36.907749 | orchestrator | "nautilus", 2025-02-10 09:55:36.907762 | orchestrator | "octopus", 2025-02-10 09:55:36.907776 | orchestrator | "pacific", 2025-02-10 09:55:36.907790 | orchestrator | "elector-pinging", 2025-02-10 09:55:36.907804 | orchestrator | "quincy" 2025-02-10 09:55:36.907874 | orchestrator | ] 2025-02-10 09:55:36.907890 | orchestrator | }, 2025-02-10 09:55:36.907904 | orchestrator | "monmap": { 2025-02-10 09:55:36.907918 | orchestrator | "epoch": 1, 2025-02-10 09:55:36.907933 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-02-10 09:55:36.907947 | orchestrator | "modified": "2025-02-10T09:22:30.578818Z", 2025-02-10 09:55:36.907961 | orchestrator | "created": "2025-02-10T09:22:30.578818Z", 2025-02-10 09:55:36.907975 | orchestrator | "min_mon_release": 17, 2025-02-10 09:55:36.907990 | orchestrator | "min_mon_release_name": "quincy", 2025-02-10 09:55:36.908004 | orchestrator | "election_strategy": 1, 2025-02-10 09:55:36.908018 | orchestrator | "disallowed_leaders: ": "", 2025-02-10 09:55:36.908032 | orchestrator | "stretch_mode": false, 2025-02-10 09:55:36.908046 | orchestrator | "tiebreaker_mon": "", 2025-02-10 09:55:36.908059 | orchestrator | "removed_ranks: ": "1", 2025-02-10 09:55:36.908073 | orchestrator | "features": { 2025-02-10 09:55:36.908087 | orchestrator | "persistent": [ 2025-02-10 09:55:36.908101 | orchestrator | "kraken", 2025-02-10 09:55:36.908115 | orchestrator | "luminous", 2025-02-10 09:55:36.908128 | orchestrator | "mimic", 2025-02-10 09:55:36.908142 | orchestrator | "osdmap-prune", 2025-02-10 09:55:36.908155 | orchestrator | "nautilus", 2025-02-10 09:55:36.908169 | orchestrator | "octopus", 2025-02-10 09:55:36.908183 | orchestrator | "pacific", 2025-02-10 09:55:36.908196 | orchestrator | "elector-pinging", 2025-02-10 09:55:36.908210 | orchestrator | "quincy" 2025-02-10 09:55:36.908224 | orchestrator | ], 2025-02-10 09:55:36.908238 | orchestrator | "optional": [] 2025-02-10 09:55:36.908252 | orchestrator | }, 2025-02-10 09:55:36.908265 | orchestrator | "mons": [ 2025-02-10 09:55:36.908279 | orchestrator | { 2025-02-10 09:55:36.908293 | orchestrator | "rank": 0, 2025-02-10 09:55:36.908307 | orchestrator | "name": "testbed-node-0", 2025-02-10 09:55:36.908321 | orchestrator | "public_addrs": { 2025-02-10 09:55:36.908335 | orchestrator | "addrvec": [ 2025-02-10 09:55:36.908349 | orchestrator | { 2025-02-10 09:55:36.908363 | orchestrator | "type": "v2", 2025-02-10 09:55:36.908397 | orchestrator | "addr": "192.168.16.10:3300", 2025-02-10 09:55:36.908411 | orchestrator | "nonce": 0 2025-02-10 09:55:36.908425 | orchestrator | }, 2025-02-10 09:55:36.908439 | orchestrator | { 2025-02-10 09:55:36.908453 | orchestrator | "type": "v1", 2025-02-10 09:55:36.908472 | orchestrator | "addr": "192.168.16.10:6789", 2025-02-10 09:55:36.908486 | orchestrator | "nonce": 0 2025-02-10 09:55:36.908500 | orchestrator | } 
2025-02-10 09:55:36.908514 | orchestrator | ] 2025-02-10 09:55:36.908528 | orchestrator | }, 2025-02-10 09:55:36.908542 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-02-10 09:55:36.908556 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-02-10 09:55:36.908570 | orchestrator | "priority": 0, 2025-02-10 09:55:36.908584 | orchestrator | "weight": 0, 2025-02-10 09:55:36.908597 | orchestrator | "crush_location": "{}" 2025-02-10 09:55:36.908611 | orchestrator | }, 2025-02-10 09:55:36.908625 | orchestrator | { 2025-02-10 09:55:36.908639 | orchestrator | "rank": 1, 2025-02-10 09:55:36.908653 | orchestrator | "name": "testbed-node-1", 2025-02-10 09:55:36.908667 | orchestrator | "public_addrs": { 2025-02-10 09:55:36.908732 | orchestrator | "addrvec": [ 2025-02-10 09:55:36.908749 | orchestrator | { 2025-02-10 09:55:36.908763 | orchestrator | "type": "v2", 2025-02-10 09:55:36.908777 | orchestrator | "addr": "192.168.16.11:3300", 2025-02-10 09:55:36.908791 | orchestrator | "nonce": 0 2025-02-10 09:55:36.908808 | orchestrator | }, 2025-02-10 09:55:36.908854 | orchestrator | { 2025-02-10 09:55:36.908869 | orchestrator | "type": "v1", 2025-02-10 09:55:36.908883 | orchestrator | "addr": "192.168.16.11:6789", 2025-02-10 09:55:36.908897 | orchestrator | "nonce": 0 2025-02-10 09:55:36.908910 | orchestrator | } 2025-02-10 09:55:36.908924 | orchestrator | ] 2025-02-10 09:55:36.908938 | orchestrator | }, 2025-02-10 09:55:36.908952 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-02-10 09:55:36.908966 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-02-10 09:55:36.909015 | orchestrator | "priority": 0, 2025-02-10 09:55:36.909030 | orchestrator | "weight": 0, 2025-02-10 09:55:36.909043 | orchestrator | "crush_location": "{}" 2025-02-10 09:55:36.909057 | orchestrator | }, 2025-02-10 09:55:36.909071 | orchestrator | { 2025-02-10 09:55:36.909085 | orchestrator | "rank": 2, 2025-02-10 09:55:36.909099 | orchestrator | "name": "testbed-node-2", 2025-02-10 09:55:36.909113 | orchestrator | "public_addrs": { 2025-02-10 09:55:36.909127 | orchestrator | "addrvec": [ 2025-02-10 09:55:36.909141 | orchestrator | { 2025-02-10 09:55:36.909155 | orchestrator | "type": "v2", 2025-02-10 09:55:36.909169 | orchestrator | "addr": "192.168.16.12:3300", 2025-02-10 09:55:36.909183 | orchestrator | "nonce": 0 2025-02-10 09:55:36.909197 | orchestrator | }, 2025-02-10 09:55:36.909210 | orchestrator | { 2025-02-10 09:55:36.909224 | orchestrator | "type": "v1", 2025-02-10 09:55:36.909238 | orchestrator | "addr": "192.168.16.12:6789", 2025-02-10 09:55:36.909252 | orchestrator | "nonce": 0 2025-02-10 09:55:36.909265 | orchestrator | } 2025-02-10 09:55:36.909279 | orchestrator | ] 2025-02-10 09:55:36.909293 | orchestrator | }, 2025-02-10 09:55:36.909306 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-02-10 09:55:36.909320 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-02-10 09:55:36.909334 | orchestrator | "priority": 0, 2025-02-10 09:55:36.909348 | orchestrator | "weight": 0, 2025-02-10 09:55:36.909361 | orchestrator | "crush_location": "{}" 2025-02-10 09:55:36.909375 | orchestrator | } 2025-02-10 09:55:36.909389 | orchestrator | ] 2025-02-10 09:55:36.909402 | orchestrator | } 2025-02-10 09:55:36.909416 | orchestrator | } 2025-02-10 09:55:36.909441 | orchestrator | 2025-02-10 09:55:37.485933 | orchestrator | # Ceph free space status 2025-02-10 09:55:37.486148 | orchestrator | 2025-02-10 09:55:37.486173 | orchestrator | + echo 2025-02-10 09:55:37.486189 | orchestrator | + echo '# Ceph free 
space status' 2025-02-10 09:55:37.486203 | orchestrator | + echo 2025-02-10 09:55:37.486218 | orchestrator | + ceph df 2025-02-10 09:55:37.486251 | orchestrator | --- RAW STORAGE --- 2025-02-10 09:55:37.521386 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-02-10 09:55:37.521522 | orchestrator | hdd 120 GiB 111 GiB 8.4 GiB 8.4 GiB 7.01 2025-02-10 09:55:37.521549 | orchestrator | TOTAL 120 GiB 111 GiB 8.4 GiB 8.4 GiB 7.01 2025-02-10 09:55:37.521572 | orchestrator | 2025-02-10 09:55:37.521596 | orchestrator | --- POOLS --- 2025-02-10 09:55:37.521619 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-02-10 09:55:37.521684 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2025-02-10 09:55:37.521710 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-02-10 09:55:37.521736 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-02-10 09:55:37.521759 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-02-10 09:55:37.521782 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-02-10 09:55:37.521805 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-02-10 09:55:37.521862 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-02-10 09:55:37.521887 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-02-10 09:55:37.521912 | orchestrator | .rgw.root 9 32 3.7 KiB 8 64 KiB 0 52 GiB 2025-02-10 09:55:37.521935 | orchestrator | backups 10 32 19 B 1 12 KiB 0 35 GiB 2025-02-10 09:55:37.521995 | orchestrator | volumes 11 32 19 B 1 12 KiB 0 35 GiB 2025-02-10 09:55:37.522110 | orchestrator | images 12 32 2.2 GiB 298 6.7 GiB 6.00 35 GiB 2025-02-10 09:55:37.522146 | orchestrator | metrics 13 32 19 B 1 12 KiB 0 35 GiB 2025-02-10 09:55:37.522162 | orchestrator | vms 14 32 19 B 1 12 KiB 0 35 GiB 2025-02-10 09:55:37.522197 | orchestrator | ++ semver latest 5.0.0 2025-02-10 09:55:37.566344 | orchestrator | + [[ -1 -eq -1 ]] 2025-02-10 09:55:39.337224 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-02-10 09:55:39.337326 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-02-10 09:55:39.337337 | orchestrator | + osism apply facts 2025-02-10 09:55:39.337359 | orchestrator | 2025-02-10 09:55:39 | INFO  | Task aefa3854-ab55-47f2-aa6d-820c3eeab9ab (facts) was prepared for execution. 2025-02-10 09:55:43.309120 | orchestrator | 2025-02-10 09:55:39 | INFO  | It takes a moment until task aefa3854-ab55-47f2-aa6d-820c3eeab9ab (facts) has been started and output is visible here. 
2025-02-10 09:55:43.309316 | orchestrator | 2025-02-10 09:55:43.309893 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-02-10 09:55:43.309952 | orchestrator | 2025-02-10 09:55:43.310407 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-02-10 09:55:43.314000 | orchestrator | Monday 10 February 2025 09:55:43 +0000 (0:00:00.255) 0:00:00.255 ******* 2025-02-10 09:55:44.125943 | orchestrator | ok: [testbed-manager] 2025-02-10 09:55:44.943971 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:55:44.948338 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:55:44.948410 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:55:44.948559 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:55:44.950362 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:55:44.953473 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:55:45.159617 | orchestrator | 2025-02-10 09:55:45.159730 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-02-10 09:55:45.159745 | orchestrator | Monday 10 February 2025 09:55:44 +0000 (0:00:01.630) 0:00:01.886 ******* 2025-02-10 09:55:45.159771 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:55:45.266278 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:55:45.401641 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:55:45.488295 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:55:45.585656 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:55:46.550513 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:55:46.550992 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:55:46.551033 | orchestrator | 2025-02-10 09:55:46.551860 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-10 09:55:46.552165 | orchestrator | 2025-02-10 09:55:46.552989 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-10 09:55:46.554285 | orchestrator | Monday 10 February 2025 09:55:46 +0000 (0:00:01.614) 0:00:03.500 ******* 2025-02-10 09:55:52.251150 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:55:52.251340 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:55:52.251358 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:55:52.251372 | orchestrator | ok: [testbed-manager] 2025-02-10 09:55:52.251384 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:55:52.251403 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:55:52.251637 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:55:52.251666 | orchestrator | 2025-02-10 09:55:52.251883 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-02-10 09:55:52.252003 | orchestrator | 2025-02-10 09:55:52.252350 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-02-10 09:55:52.252647 | orchestrator | Monday 10 February 2025 09:55:52 +0000 (0:00:05.700) 0:00:09.201 ******* 2025-02-10 09:55:52.460024 | orchestrator | skipping: [testbed-manager] 2025-02-10 09:55:52.556787 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:55:52.655131 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:55:52.756386 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:55:52.842740 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:55:52.889977 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:55:52.891055 | orchestrator | skipping: 
[testbed-node-5] 2025-02-10 09:55:52.891120 | orchestrator | 2025-02-10 09:55:52.891766 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:55:52.893253 | orchestrator | 2025-02-10 09:55:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:55:52.893346 | orchestrator | 2025-02-10 09:55:52 | INFO  | Please wait and do not abort execution. 2025-02-10 09:55:52.893380 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:55:52.893647 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:55:52.894678 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:55:52.895046 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:55:52.896125 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:55:52.896593 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:55:52.897094 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:55:52.897186 | orchestrator | 2025-02-10 09:55:52.897644 | orchestrator | 2025-02-10 09:55:52.897972 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:55:52.898310 | orchestrator | Monday 10 February 2025 09:55:52 +0000 (0:00:00.639) 0:00:09.840 ******* 2025-02-10 09:55:52.898746 | orchestrator | =============================================================================== 2025-02-10 09:55:52.899313 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.70s 2025-02-10 09:55:52.899749 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.63s 2025-02-10 09:55:52.899858 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.61s 2025-02-10 09:55:52.900266 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.64s 2025-02-10 09:55:53.632009 | orchestrator | + osism validate ceph-mons 2025-02-10 09:56:16.268570 | orchestrator | 2025-02-10 09:56:16.268768 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-02-10 09:56:16.268917 | orchestrator | 2025-02-10 09:56:16.268952 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-02-10 09:56:16.268975 | orchestrator | Monday 10 February 2025 09:55:59 +0000 (0:00:00.441) 0:00:00.441 ******* 2025-02-10 09:56:16.269001 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:56:16.269024 | orchestrator | 2025-02-10 09:56:16.269044 | orchestrator | TASK [Create report output directory] ****************************************** 2025-02-10 09:56:16.269060 | orchestrator | Monday 10 February 2025 09:55:59 +0000 (0:00:00.677) 0:00:01.119 ******* 2025-02-10 09:56:16.269076 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:56:16.269091 | orchestrator | 2025-02-10 09:56:16.269106 | orchestrator | TASK [Define report vars] ****************************************************** 2025-02-10 09:56:16.269122 | orchestrator | 
Monday 10 February 2025 09:56:00 +0000 (0:00:00.960) 0:00:02.079 ******* 2025-02-10 09:56:16.269137 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:16.269153 | orchestrator | 2025-02-10 09:56:16.269168 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-02-10 09:56:16.269184 | orchestrator | Monday 10 February 2025 09:56:01 +0000 (0:00:00.286) 0:00:02.366 ******* 2025-02-10 09:56:16.269200 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:16.269215 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:56:16.269230 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:56:16.269247 | orchestrator | 2025-02-10 09:56:16.269262 | orchestrator | TASK [Get container info] ****************************************************** 2025-02-10 09:56:16.269303 | orchestrator | Monday 10 February 2025 09:56:01 +0000 (0:00:00.320) 0:00:02.686 ******* 2025-02-10 09:56:16.269329 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:56:16.269352 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:16.269375 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:56:16.269401 | orchestrator | 2025-02-10 09:56:16.269424 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-02-10 09:56:16.269449 | orchestrator | Monday 10 February 2025 09:56:02 +0000 (0:00:01.105) 0:00:03.792 ******* 2025-02-10 09:56:16.269475 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:16.269491 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:56:16.269505 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:56:16.269518 | orchestrator | 2025-02-10 09:56:16.269532 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-02-10 09:56:16.269546 | orchestrator | Monday 10 February 2025 09:56:02 +0000 (0:00:00.330) 0:00:04.122 ******* 2025-02-10 09:56:16.269559 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:16.269573 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:56:16.269586 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:56:16.269601 | orchestrator | 2025-02-10 09:56:16.269625 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-02-10 09:56:16.269649 | orchestrator | Monday 10 February 2025 09:56:03 +0000 (0:00:00.559) 0:00:04.682 ******* 2025-02-10 09:56:16.269672 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:16.269768 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:56:16.269820 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:56:16.269845 | orchestrator | 2025-02-10 09:56:16.269868 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-02-10 09:56:16.269894 | orchestrator | Monday 10 February 2025 09:56:03 +0000 (0:00:00.352) 0:00:05.034 ******* 2025-02-10 09:56:16.269917 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:16.269935 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:56:16.269949 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:56:16.269962 | orchestrator | 2025-02-10 09:56:16.269976 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-02-10 09:56:16.269989 | orchestrator | Monday 10 February 2025 09:56:04 +0000 (0:00:00.345) 0:00:05.380 ******* 2025-02-10 09:56:16.270007 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:16.270106 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:56:16.270133 | orchestrator | ok: [testbed-node-2] 2025-02-10 
09:56:16.270159 | orchestrator | 2025-02-10 09:56:16.270204 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-02-10 09:56:16.270233 | orchestrator | Monday 10 February 2025 09:56:04 +0000 (0:00:00.333) 0:00:05.713 ******* 2025-02-10 09:56:16.270259 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:16.270287 | orchestrator | 2025-02-10 09:56:16.270314 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-02-10 09:56:16.270340 | orchestrator | Monday 10 February 2025 09:56:05 +0000 (0:00:00.766) 0:00:06.479 ******* 2025-02-10 09:56:16.270369 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:16.270396 | orchestrator | 2025-02-10 09:56:16.270421 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-02-10 09:56:16.270449 | orchestrator | Monday 10 February 2025 09:56:05 +0000 (0:00:00.300) 0:00:06.780 ******* 2025-02-10 09:56:16.270475 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:16.270502 | orchestrator | 2025-02-10 09:56:16.270527 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:56:16.270558 | orchestrator | Monday 10 February 2025 09:56:05 +0000 (0:00:00.278) 0:00:07.058 ******* 2025-02-10 09:56:16.270577 | orchestrator | 2025-02-10 09:56:16.270591 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:56:16.270604 | orchestrator | Monday 10 February 2025 09:56:05 +0000 (0:00:00.082) 0:00:07.141 ******* 2025-02-10 09:56:16.270618 | orchestrator | 2025-02-10 09:56:16.270631 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:56:16.270645 | orchestrator | Monday 10 February 2025 09:56:05 +0000 (0:00:00.084) 0:00:07.225 ******* 2025-02-10 09:56:16.270658 | orchestrator | 2025-02-10 09:56:16.270672 | orchestrator | TASK [Print report file information] ******************************************* 2025-02-10 09:56:16.270686 | orchestrator | Monday 10 February 2025 09:56:06 +0000 (0:00:00.094) 0:00:07.320 ******* 2025-02-10 09:56:16.270699 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:16.270713 | orchestrator | 2025-02-10 09:56:16.270727 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-02-10 09:56:16.270741 | orchestrator | Monday 10 February 2025 09:56:06 +0000 (0:00:00.331) 0:00:07.651 ******* 2025-02-10 09:56:16.270754 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:16.270770 | orchestrator | 2025-02-10 09:56:16.270849 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-02-10 09:56:19.304285 | orchestrator | Monday 10 February 2025 09:56:06 +0000 (0:00:00.251) 0:00:07.903 ******* 2025-02-10 09:56:19.304413 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:19.304433 | orchestrator | 2025-02-10 09:56:19.304449 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-02-10 09:56:19.304463 | orchestrator | Monday 10 February 2025 09:56:06 +0000 (0:00:00.132) 0:00:08.036 ******* 2025-02-10 09:56:19.304477 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:56:19.304499 | orchestrator | 2025-02-10 09:56:19.304523 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-02-10 09:56:19.304569 | orchestrator | 
Monday 10 February 2025 09:56:08 +0000 (0:00:01.803) 0:00:09.839 ******* 2025-02-10 09:56:19.304594 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:19.304617 | orchestrator | 2025-02-10 09:56:19.304639 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-02-10 09:56:19.304663 | orchestrator | Monday 10 February 2025 09:56:08 +0000 (0:00:00.320) 0:00:10.160 ******* 2025-02-10 09:56:19.304685 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:19.304707 | orchestrator | 2025-02-10 09:56:19.304729 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-02-10 09:56:19.304754 | orchestrator | Monday 10 February 2025 09:56:09 +0000 (0:00:00.362) 0:00:10.523 ******* 2025-02-10 09:56:19.304776 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:19.304834 | orchestrator | 2025-02-10 09:56:19.304860 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-02-10 09:56:19.304884 | orchestrator | Monday 10 February 2025 09:56:09 +0000 (0:00:00.258) 0:00:10.781 ******* 2025-02-10 09:56:19.304938 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:19.304963 | orchestrator | 2025-02-10 09:56:19.304987 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-02-10 09:56:19.305009 | orchestrator | Monday 10 February 2025 09:56:09 +0000 (0:00:00.274) 0:00:11.056 ******* 2025-02-10 09:56:19.305034 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:19.305057 | orchestrator | 2025-02-10 09:56:19.305080 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-02-10 09:56:19.305106 | orchestrator | Monday 10 February 2025 09:56:09 +0000 (0:00:00.137) 0:00:11.193 ******* 2025-02-10 09:56:19.305128 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:19.305152 | orchestrator | 2025-02-10 09:56:19.305175 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-02-10 09:56:19.305198 | orchestrator | Monday 10 February 2025 09:56:10 +0000 (0:00:00.137) 0:00:11.330 ******* 2025-02-10 09:56:19.305221 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:19.305243 | orchestrator | 2025-02-10 09:56:19.305264 | orchestrator | TASK [Gather status data] ****************************************************** 2025-02-10 09:56:19.305288 | orchestrator | Monday 10 February 2025 09:56:10 +0000 (0:00:00.131) 0:00:11.462 ******* 2025-02-10 09:56:19.305311 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:56:19.305335 | orchestrator | 2025-02-10 09:56:19.305357 | orchestrator | TASK [Set health test data] **************************************************** 2025-02-10 09:56:19.305380 | orchestrator | Monday 10 February 2025 09:56:11 +0000 (0:00:01.467) 0:00:12.930 ******* 2025-02-10 09:56:19.305404 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:19.305427 | orchestrator | 2025-02-10 09:56:19.305451 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-02-10 09:56:19.305475 | orchestrator | Monday 10 February 2025 09:56:11 +0000 (0:00:00.252) 0:00:13.182 ******* 2025-02-10 09:56:19.305499 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:19.305523 | orchestrator | 2025-02-10 09:56:19.305546 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-02-10 09:56:19.305571 | orchestrator | Monday 10 February 
2025 09:56:12 +0000 (0:00:00.144) 0:00:13.327 ******* 2025-02-10 09:56:19.305594 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:19.305619 | orchestrator | 2025-02-10 09:56:19.305642 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-02-10 09:56:19.305660 | orchestrator | Monday 10 February 2025 09:56:12 +0000 (0:00:00.161) 0:00:13.489 ******* 2025-02-10 09:56:19.305674 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:19.305687 | orchestrator | 2025-02-10 09:56:19.305701 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-02-10 09:56:19.305714 | orchestrator | Monday 10 February 2025 09:56:12 +0000 (0:00:00.137) 0:00:13.626 ******* 2025-02-10 09:56:19.305727 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:19.305741 | orchestrator | 2025-02-10 09:56:19.305754 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-02-10 09:56:19.305767 | orchestrator | Monday 10 February 2025 09:56:12 +0000 (0:00:00.370) 0:00:13.997 ******* 2025-02-10 09:56:19.305808 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:56:19.305828 | orchestrator | 2025-02-10 09:56:19.305842 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-02-10 09:56:19.305856 | orchestrator | Monday 10 February 2025 09:56:13 +0000 (0:00:00.291) 0:00:14.288 ******* 2025-02-10 09:56:19.305869 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:19.305884 | orchestrator | 2025-02-10 09:56:19.305898 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-02-10 09:56:19.305914 | orchestrator | Monday 10 February 2025 09:56:13 +0000 (0:00:00.256) 0:00:14.544 ******* 2025-02-10 09:56:19.305938 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:56:19.305963 | orchestrator | 2025-02-10 09:56:19.305987 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-02-10 09:56:19.306094 | orchestrator | Monday 10 February 2025 09:56:15 +0000 (0:00:02.081) 0:00:16.626 ******* 2025-02-10 09:56:19.306129 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:56:19.306143 | orchestrator | 2025-02-10 09:56:19.306157 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-02-10 09:56:19.306170 | orchestrator | Monday 10 February 2025 09:56:15 +0000 (0:00:00.313) 0:00:16.939 ******* 2025-02-10 09:56:19.306184 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:56:19.306198 | orchestrator | 2025-02-10 09:56:19.306236 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:56:19.594777 | orchestrator | Monday 10 February 2025 09:56:15 +0000 (0:00:00.284) 0:00:17.223 ******* 2025-02-10 09:56:19.594955 | orchestrator | 2025-02-10 09:56:19.594974 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:56:19.594989 | orchestrator | Monday 10 February 2025 09:56:16 +0000 (0:00:00.091) 0:00:17.315 ******* 2025-02-10 09:56:19.595003 | orchestrator | 2025-02-10 09:56:19.595017 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:56:19.595032 | orchestrator | Monday 10 February 2025 09:56:16 +0000 
(0:00:00.076) 0:00:17.391 ******* 2025-02-10 09:56:19.595046 | orchestrator | 2025-02-10 09:56:19.595060 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-02-10 09:56:19.595073 | orchestrator | Monday 10 February 2025 09:56:16 +0000 (0:00:00.092) 0:00:17.484 ******* 2025-02-10 09:56:19.595087 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:56:19.595101 | orchestrator | 2025-02-10 09:56:19.595115 | orchestrator | TASK [Print report file information] ******************************************* 2025-02-10 09:56:19.595128 | orchestrator | Monday 10 February 2025 09:56:17 +0000 (0:00:01.695) 0:00:19.179 ******* 2025-02-10 09:56:19.595142 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-02-10 09:56:19.595156 | orchestrator |  "msg": [ 2025-02-10 09:56:19.595170 | orchestrator |  "Validator run completed.", 2025-02-10 09:56:19.595261 | orchestrator |  "You can find the report file here:", 2025-02-10 09:56:19.595277 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-02-10T09:55:59+00:00-report.json", 2025-02-10 09:56:19.595293 | orchestrator |  "on the following host:", 2025-02-10 09:56:19.595310 | orchestrator |  "testbed-manager" 2025-02-10 09:56:19.595326 | orchestrator |  ] 2025-02-10 09:56:19.595342 | orchestrator | } 2025-02-10 09:56:19.595358 | orchestrator | 2025-02-10 09:56:19.595374 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:56:19.595390 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-02-10 09:56:19.595407 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:56:19.595424 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:56:19.595446 | orchestrator | 2025-02-10 09:56:19.595462 | orchestrator | 2025-02-10 09:56:19.595478 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:56:19.595504 | orchestrator | Monday 10 February 2025 09:56:18 +0000 (0:00:00.954) 0:00:20.134 ******* 2025-02-10 09:56:19.595521 | orchestrator | =============================================================================== 2025-02-10 09:56:19.595537 | orchestrator | Aggregate test results step one ----------------------------------------- 2.08s 2025-02-10 09:56:19.595552 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.80s 2025-02-10 09:56:19.595567 | orchestrator | Write report file ------------------------------------------------------- 1.70s 2025-02-10 09:56:19.595583 | orchestrator | Gather status data ------------------------------------------------------ 1.47s 2025-02-10 09:56:19.595598 | orchestrator | Get container info ------------------------------------------------------ 1.11s 2025-02-10 09:56:19.595637 | orchestrator | Create report output directory ------------------------------------------ 0.96s 2025-02-10 09:56:19.595652 | orchestrator | Print report file information ------------------------------------------- 0.95s 2025-02-10 09:56:19.595665 | orchestrator | Aggregate test results step one ----------------------------------------- 0.77s 2025-02-10 09:56:19.595679 | orchestrator | Get timestamp for report file ------------------------------------------- 0.68s 2025-02-10 09:56:19.595692 | orchestrator 
| Set test result to passed if container is existing ---------------------- 0.56s 2025-02-10 09:56:19.595706 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.37s 2025-02-10 09:56:19.595720 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.36s 2025-02-10 09:56:19.595733 | orchestrator | Prepare test data ------------------------------------------------------- 0.35s 2025-02-10 09:56:19.595747 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.35s 2025-02-10 09:56:19.595761 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.33s 2025-02-10 09:56:19.595780 | orchestrator | Print report file information ------------------------------------------- 0.33s 2025-02-10 09:56:19.595819 | orchestrator | Set test result to failed if container is missing ----------------------- 0.33s 2025-02-10 09:56:19.595834 | orchestrator | Set quorum test data ---------------------------------------------------- 0.32s 2025-02-10 09:56:19.595847 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s 2025-02-10 09:56:19.595861 | orchestrator | Aggregate test results step two ----------------------------------------- 0.31s 2025-02-10 09:56:19.595892 | orchestrator | + osism validate ceph-mgrs 2025-02-10 09:56:41.444868 | orchestrator | 2025-02-10 09:56:41.444987 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-02-10 09:56:41.445004 | orchestrator | 2025-02-10 09:56:41.445017 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-02-10 09:56:41.445030 | orchestrator | Monday 10 February 2025 09:56:25 +0000 (0:00:00.616) 0:00:00.617 ******* 2025-02-10 09:56:41.445042 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:56:41.445055 | orchestrator | 2025-02-10 09:56:41.445067 | orchestrator | TASK [Create report output directory] ****************************************** 2025-02-10 09:56:41.445079 | orchestrator | Monday 10 February 2025 09:56:26 +0000 (0:00:00.777) 0:00:01.394 ******* 2025-02-10 09:56:41.445092 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:56:41.445104 | orchestrator | 2025-02-10 09:56:41.445116 | orchestrator | TASK [Define report vars] ****************************************************** 2025-02-10 09:56:41.445128 | orchestrator | Monday 10 February 2025 09:56:27 +0000 (0:00:01.001) 0:00:02.396 ******* 2025-02-10 09:56:41.445141 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:41.445154 | orchestrator | 2025-02-10 09:56:41.445167 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-02-10 09:56:41.445179 | orchestrator | Monday 10 February 2025 09:56:27 +0000 (0:00:00.258) 0:00:02.654 ******* 2025-02-10 09:56:41.445191 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:41.445203 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:56:41.445215 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:56:41.445228 | orchestrator | 2025-02-10 09:56:41.445240 | orchestrator | TASK [Get container info] ****************************************************** 2025-02-10 09:56:41.445252 | orchestrator | Monday 10 February 2025 09:56:27 +0000 (0:00:00.360) 0:00:03.015 ******* 2025-02-10 09:56:41.445264 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:56:41.445276 | orchestrator 
| ok: [testbed-node-2] 2025-02-10 09:56:41.445288 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:41.445301 | orchestrator | 2025-02-10 09:56:41.445313 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-02-10 09:56:41.445325 | orchestrator | Monday 10 February 2025 09:56:29 +0000 (0:00:01.365) 0:00:04.381 ******* 2025-02-10 09:56:41.445338 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:41.445353 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:56:41.445392 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:56:41.445407 | orchestrator | 2025-02-10 09:56:41.445421 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-02-10 09:56:41.445433 | orchestrator | Monday 10 February 2025 09:56:29 +0000 (0:00:00.326) 0:00:04.708 ******* 2025-02-10 09:56:41.445446 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:41.445458 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:56:41.445470 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:56:41.445482 | orchestrator | 2025-02-10 09:56:41.445494 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-02-10 09:56:41.445506 | orchestrator | Monday 10 February 2025 09:56:30 +0000 (0:00:00.583) 0:00:05.291 ******* 2025-02-10 09:56:41.445519 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:41.445531 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:56:41.445543 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:56:41.445555 | orchestrator | 2025-02-10 09:56:41.445567 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-02-10 09:56:41.445579 | orchestrator | Monday 10 February 2025 09:56:30 +0000 (0:00:00.388) 0:00:05.680 ******* 2025-02-10 09:56:41.445592 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:41.445604 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:56:41.445616 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:56:41.445629 | orchestrator | 2025-02-10 09:56:41.445641 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-02-10 09:56:41.445653 | orchestrator | Monday 10 February 2025 09:56:30 +0000 (0:00:00.332) 0:00:06.013 ******* 2025-02-10 09:56:41.445665 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:41.445677 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:56:41.445690 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:56:41.445702 | orchestrator | 2025-02-10 09:56:41.445714 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-02-10 09:56:41.445726 | orchestrator | Monday 10 February 2025 09:56:31 +0000 (0:00:00.357) 0:00:06.370 ******* 2025-02-10 09:56:41.445738 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:41.445751 | orchestrator | 2025-02-10 09:56:41.445763 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-02-10 09:56:41.445796 | orchestrator | Monday 10 February 2025 09:56:32 +0000 (0:00:00.790) 0:00:07.161 ******* 2025-02-10 09:56:41.445809 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:41.445821 | orchestrator | 2025-02-10 09:56:41.445833 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-02-10 09:56:41.445858 | orchestrator | Monday 10 February 2025 09:56:32 +0000 (0:00:00.302) 0:00:07.464 ******* 2025-02-10 
09:56:41.445871 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:41.445883 | orchestrator | 2025-02-10 09:56:41.445895 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:56:41.445907 | orchestrator | Monday 10 February 2025 09:56:32 +0000 (0:00:00.276) 0:00:07.740 ******* 2025-02-10 09:56:41.445919 | orchestrator | 2025-02-10 09:56:41.445931 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:56:41.445943 | orchestrator | Monday 10 February 2025 09:56:32 +0000 (0:00:00.084) 0:00:07.825 ******* 2025-02-10 09:56:41.445955 | orchestrator | 2025-02-10 09:56:41.445968 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:56:41.445979 | orchestrator | Monday 10 February 2025 09:56:32 +0000 (0:00:00.072) 0:00:07.898 ******* 2025-02-10 09:56:41.445991 | orchestrator | 2025-02-10 09:56:41.446003 | orchestrator | TASK [Print report file information] ******************************************* 2025-02-10 09:56:41.446065 | orchestrator | Monday 10 February 2025 09:56:32 +0000 (0:00:00.100) 0:00:07.998 ******* 2025-02-10 09:56:41.446079 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:41.446091 | orchestrator | 2025-02-10 09:56:41.446104 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-02-10 09:56:41.446116 | orchestrator | Monday 10 February 2025 09:56:33 +0000 (0:00:00.268) 0:00:08.267 ******* 2025-02-10 09:56:41.446136 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:41.446149 | orchestrator | 2025-02-10 09:56:41.446172 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-02-10 09:56:41.835815 | orchestrator | Monday 10 February 2025 09:56:33 +0000 (0:00:00.273) 0:00:08.541 ******* 2025-02-10 09:56:41.835946 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:41.835967 | orchestrator | 2025-02-10 09:56:41.835983 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-02-10 09:56:41.835997 | orchestrator | Monday 10 February 2025 09:56:33 +0000 (0:00:00.116) 0:00:08.657 ******* 2025-02-10 09:56:41.836011 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:56:41.836026 | orchestrator | 2025-02-10 09:56:41.836040 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-02-10 09:56:41.836054 | orchestrator | Monday 10 February 2025 09:56:35 +0000 (0:00:01.733) 0:00:10.391 ******* 2025-02-10 09:56:41.836067 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:41.836081 | orchestrator | 2025-02-10 09:56:41.836095 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-02-10 09:56:41.836109 | orchestrator | Monday 10 February 2025 09:56:35 +0000 (0:00:00.500) 0:00:10.891 ******* 2025-02-10 09:56:41.836123 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:41.836137 | orchestrator | 2025-02-10 09:56:41.836151 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-02-10 09:56:41.836164 | orchestrator | Monday 10 February 2025 09:56:36 +0000 (0:00:00.248) 0:00:11.140 ******* 2025-02-10 09:56:41.836178 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:41.836192 | orchestrator | 2025-02-10 09:56:41.836205 | orchestrator | TASK [Pass test if required mgr modules are enabled] 
*************************** 2025-02-10 09:56:41.836219 | orchestrator | Monday 10 February 2025 09:56:36 +0000 (0:00:00.149) 0:00:11.290 ******* 2025-02-10 09:56:41.836233 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:56:41.836246 | orchestrator | 2025-02-10 09:56:41.836260 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-02-10 09:56:41.836274 | orchestrator | Monday 10 February 2025 09:56:36 +0000 (0:00:00.164) 0:00:11.454 ******* 2025-02-10 09:56:41.836288 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:56:41.836302 | orchestrator | 2025-02-10 09:56:41.836316 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-02-10 09:56:41.836332 | orchestrator | Monday 10 February 2025 09:56:36 +0000 (0:00:00.288) 0:00:11.743 ******* 2025-02-10 09:56:41.836347 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:56:41.836363 | orchestrator | 2025-02-10 09:56:41.836378 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-02-10 09:56:41.836393 | orchestrator | Monday 10 February 2025 09:56:36 +0000 (0:00:00.302) 0:00:12.046 ******* 2025-02-10 09:56:41.836409 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:56:41.836428 | orchestrator | 2025-02-10 09:56:41.836444 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-02-10 09:56:41.836459 | orchestrator | Monday 10 February 2025 09:56:38 +0000 (0:00:01.490) 0:00:13.537 ******* 2025-02-10 09:56:41.836474 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:56:41.836490 | orchestrator | 2025-02-10 09:56:41.836506 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-02-10 09:56:41.836540 | orchestrator | Monday 10 February 2025 09:56:38 +0000 (0:00:00.304) 0:00:13.841 ******* 2025-02-10 09:56:41.836556 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:56:41.836572 | orchestrator | 2025-02-10 09:56:41.836587 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:56:41.836602 | orchestrator | Monday 10 February 2025 09:56:39 +0000 (0:00:00.274) 0:00:14.116 ******* 2025-02-10 09:56:41.836618 | orchestrator | 2025-02-10 09:56:41.836633 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:56:41.836649 | orchestrator | Monday 10 February 2025 09:56:39 +0000 (0:00:00.077) 0:00:14.193 ******* 2025-02-10 09:56:41.836690 | orchestrator | 2025-02-10 09:56:41.836704 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:56:41.836718 | orchestrator | Monday 10 February 2025 09:56:39 +0000 (0:00:00.082) 0:00:14.275 ******* 2025-02-10 09:56:41.836732 | orchestrator | 2025-02-10 09:56:41.836745 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-02-10 09:56:41.836759 | orchestrator | Monday 10 February 2025 09:56:39 +0000 (0:00:00.301) 0:00:14.577 ******* 2025-02-10 09:56:41.836793 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-02-10 09:56:41.836808 | orchestrator | 2025-02-10 09:56:41.836822 | orchestrator | TASK [Print report file information] ******************************************* 2025-02-10 09:56:41.836836 | 
orchestrator | Monday 10 February 2025 09:56:40 +0000 (0:00:01.445) 0:00:16.023 ******* 2025-02-10 09:56:41.836850 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-02-10 09:56:41.836863 | orchestrator |  "msg": [ 2025-02-10 09:56:41.836877 | orchestrator |  "Validator run completed.", 2025-02-10 09:56:41.836891 | orchestrator |  "You can find the report file here:", 2025-02-10 09:56:41.836905 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-02-10T09:56:26+00:00-report.json", 2025-02-10 09:56:41.836920 | orchestrator |  "on the following host:", 2025-02-10 09:56:41.836934 | orchestrator |  "testbed-manager" 2025-02-10 09:56:41.836947 | orchestrator |  ] 2025-02-10 09:56:41.836961 | orchestrator | } 2025-02-10 09:56:41.836975 | orchestrator | 2025-02-10 09:56:41.836989 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:56:41.837004 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-02-10 09:56:41.837020 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:56:41.837054 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:56:42.116855 | orchestrator | 2025-02-10 09:56:42.116983 | orchestrator | 2025-02-10 09:56:42.117003 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:56:42.117020 | orchestrator | Monday 10 February 2025 09:56:41 +0000 (0:00:00.461) 0:00:16.485 ******* 2025-02-10 09:56:42.117035 | orchestrator | =============================================================================== 2025-02-10 09:56:42.117049 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.73s 2025-02-10 09:56:42.117073 | orchestrator | Aggregate test results step one ----------------------------------------- 1.49s 2025-02-10 09:56:42.117098 | orchestrator | Write report file ------------------------------------------------------- 1.45s 2025-02-10 09:56:42.117122 | orchestrator | Get container info ------------------------------------------------------ 1.37s 2025-02-10 09:56:42.117146 | orchestrator | Create report output directory ------------------------------------------ 1.00s 2025-02-10 09:56:42.117172 | orchestrator | Aggregate test results step one ----------------------------------------- 0.79s 2025-02-10 09:56:42.117198 | orchestrator | Get timestamp for report file ------------------------------------------- 0.78s 2025-02-10 09:56:42.117224 | orchestrator | Set test result to passed if container is existing ---------------------- 0.58s 2025-02-10 09:56:42.117249 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.50s 2025-02-10 09:56:42.117277 | orchestrator | Print report file information ------------------------------------------- 0.46s 2025-02-10 09:56:42.117303 | orchestrator | Flush handlers ---------------------------------------------------------- 0.46s 2025-02-10 09:56:42.117330 | orchestrator | Prepare test data ------------------------------------------------------- 0.39s 2025-02-10 09:56:42.117358 | orchestrator | Prepare test data for container existance test -------------------------- 0.36s 2025-02-10 09:56:42.117422 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.36s 2025-02-10 09:56:42.117451 | orchestrator | 
Set test result to failed if ceph-mgr is not running -------------------- 0.33s 2025-02-10 09:56:42.117478 | orchestrator | Set test result to failed if container is missing ----------------------- 0.33s 2025-02-10 09:56:42.117504 | orchestrator | Aggregate test results step two ----------------------------------------- 0.30s 2025-02-10 09:56:42.117551 | orchestrator | Aggregate test results step two ----------------------------------------- 0.30s 2025-02-10 09:56:42.117578 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.30s 2025-02-10 09:56:42.117602 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.29s 2025-02-10 09:56:42.117651 | orchestrator | + osism validate ceph-osds 2025-02-10 09:56:52.370222 | orchestrator | 2025-02-10 09:56:52.370348 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-02-10 09:56:52.370374 | orchestrator | 2025-02-10 09:56:52.370394 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-02-10 09:56:52.370412 | orchestrator | Monday 10 February 2025 09:56:47 +0000 (0:00:00.398) 0:00:00.398 ******* 2025-02-10 09:56:52.370432 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:56:52.370451 | orchestrator | 2025-02-10 09:56:52.370469 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-10 09:56:52.370489 | orchestrator | Monday 10 February 2025 09:56:48 +0000 (0:00:00.713) 0:00:01.111 ******* 2025-02-10 09:56:52.370508 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:56:52.370527 | orchestrator | 2025-02-10 09:56:52.370547 | orchestrator | TASK [Create report output directory] ****************************************** 2025-02-10 09:56:52.370565 | orchestrator | Monday 10 February 2025 09:56:48 +0000 (0:00:00.242) 0:00:01.353 ******* 2025-02-10 09:56:52.370585 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:56:52.370605 | orchestrator | 2025-02-10 09:56:52.370625 | orchestrator | TASK [Define report vars] ****************************************************** 2025-02-10 09:56:52.370645 | orchestrator | Monday 10 February 2025 09:56:49 +0000 (0:00:01.055) 0:00:02.409 ******* 2025-02-10 09:56:52.370665 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:56:52.370686 | orchestrator | 2025-02-10 09:56:52.370706 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-02-10 09:56:52.370725 | orchestrator | Monday 10 February 2025 09:56:49 +0000 (0:00:00.132) 0:00:02.541 ******* 2025-02-10 09:56:52.370744 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:56:52.370791 | orchestrator | 2025-02-10 09:56:52.370810 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-02-10 09:56:52.370827 | orchestrator | Monday 10 February 2025 09:56:49 +0000 (0:00:00.141) 0:00:02.683 ******* 2025-02-10 09:56:52.370844 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:56:52.370860 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:56:52.370877 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:56:52.370895 | orchestrator | 2025-02-10 09:56:52.370913 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-02-10 09:56:52.370932 | orchestrator | Monday 10 February 2025 09:56:50 +0000 
(0:00:00.368) 0:00:03.051 ******* 2025-02-10 09:56:52.370951 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:56:52.370968 | orchestrator | 2025-02-10 09:56:52.370985 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-02-10 09:56:52.371002 | orchestrator | Monday 10 February 2025 09:56:50 +0000 (0:00:00.177) 0:00:03.229 ******* 2025-02-10 09:56:52.371019 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:56:52.371035 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:56:52.371051 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:56:52.371066 | orchestrator | 2025-02-10 09:56:52.371083 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-02-10 09:56:52.371100 | orchestrator | Monday 10 February 2025 09:56:50 +0000 (0:00:00.370) 0:00:03.600 ******* 2025-02-10 09:56:52.371116 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:56:52.371161 | orchestrator | 2025-02-10 09:56:52.371180 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-02-10 09:56:52.371197 | orchestrator | Monday 10 February 2025 09:56:51 +0000 (0:00:00.607) 0:00:04.207 ******* 2025-02-10 09:56:52.371214 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:56:52.371247 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:56:52.371264 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:56:52.371280 | orchestrator | 2025-02-10 09:56:52.371298 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-02-10 09:56:52.371314 | orchestrator | Monday 10 February 2025 09:56:52 +0000 (0:00:00.551) 0:00:04.759 ******* 2025-02-10 09:56:52.371334 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5367447bf990062b7fdb3c8ee7fd8b97d69b434284cc46f3879a31c43fd71ef2', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-02-10 09:56:52.371412 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f6d1d4d9b8b8ae6c315b11e1c1983593c806a8980c205bb4dd2d0f2647bd50d0', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-02-10 09:56:52.371433 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e771188992b8328dc6a112310862719c28a82880e91927b7d1a006949da019d1', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-02-10 09:56:52.371451 | orchestrator | skipping: [testbed-node-3] => (item={'id': '000c3c7fd42df81fc29893415a9cc67f15c72feaa1ea44e3f5e59002302a38f3', 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-02-10 09:56:52.371470 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6c99e3826ac0d95fadbb23da559e0d3efff614fd90fca206fab3eee21d1d9106', 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-02-10 09:56:52.371508 | orchestrator | skipping: [testbed-node-3] => (item={'id': '065fbc784bb39cb51d01510fe65601bdbb5f3ed2081126025afb76a7920e4b0a', 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  
2025-02-10 09:56:52.673997 | orchestrator | skipping: [testbed-node-3] => (item={'id': '41b81bd8ad91cb81cb28cad4d495eec8279d1c57c8f610dd4e5ce6bc17dd824d', 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 11 minutes'})  2025-02-10 09:56:52.674147 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9179ef3308be6b9b33d15f8044bdf2d3bdb5197675627bd69bfe22b78aede4b7', 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-02-10 09:56:52.674161 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8e1c341b748c7207ecd7fec24ddb5a704a971269095a575d06cd96fcdde262c6', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2025-02-10 09:56:52.674174 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd126dca7d93437ea5f1a16de225b1f3724d5d8c36ff0dc3b8059d05ccf6dc849', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-rgw-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2025-02-10 09:56:52.674185 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9c4f350807dbd189ce52fbc4d3d0829c13f68bc1289a0c7fd30ab5a910a8576f', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 26 minutes'})  2025-02-10 09:56:52.674220 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b2438c70634e4fd6b96e6df9f3316744028b5ae597687213d05e73d767681720', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 27 minutes'})  2025-02-10 09:56:52.674232 | orchestrator | ok: [testbed-node-3] => (item={'id': 'd9b684596eb469286a8f7a0208f026c8d1ab85edb8cb2660fd49cc16c8e54e84', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 29 minutes'}) 2025-02-10 09:56:52.674242 | orchestrator | ok: [testbed-node-3] => (item={'id': '12b7dafa46122d3e9278832c6003f78d472f9b14ecdf268c90ec02643c3ae6b8', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 29 minutes'}) 2025-02-10 09:56:52.674254 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bfb1e482ffdb2e5f259c70795f325e9f52e72a766fac19fe31cb632478a62731', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 33 minutes'})  2025-02-10 09:56:52.674265 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c6fc52d8e5f175f3940d94ce9233141d7e255c4cdc14255b0065f34387880b2d', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 35 minutes (healthy)'})  2025-02-10 09:56:52.674275 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b10c88a85fdca5260a92c4f2b57c91f599abdd25c68aa6adf434c9be3b75c6ad', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 35 minutes (healthy)'})  2025-02-10 09:56:52.674286 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4544e272868875e3a6ecd19c4622f627b8499624d5e560f6c70916f864b6a32b', 
'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'name': '/cron', 'state': 'running', 'status': 'Up 36 minutes'})  2025-02-10 09:56:52.674297 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ea8bf0fc11be20e2d3f69255cab90d03d537351cb595e63ba9b008fd4da37536', 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 36 minutes'})  2025-02-10 09:56:52.674316 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd4a8cf0ac8d61fb5fd39f669bcdc2bbdece6884082b129528eec04c3fa2ba9b2', 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 37 minutes'})  2025-02-10 09:56:52.674341 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7167b1f49045486d0f277899c58aa838cfc1337aa44db567adad36a1a2436c2f', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-02-10 09:56:52.674352 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bcdb4846896a807fdc82ae751837959f354016eb685055bb030baf732cc0dc16', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-02-10 09:56:52.674361 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ad95664867e14e7f4a46f39ae315b9e074cae74cbef876b00522741ef0477334', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-02-10 09:56:52.674374 | orchestrator | skipping: [testbed-node-4] => (item={'id': '90746fe7dd02fa5f5e65616f57fde85253954b2476d5c114efcda9a277fae70b', 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-02-10 09:56:52.674384 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3e0afb5862dd6bed816b4ada326698b7f8cf4e2365886d2cfd35b74b498c2978', 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2025-02-10 09:56:52.674399 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e9f69e8f2ab26360c417ce1703e16fc97182a450ca9d4ae16c8a8481585c235f', 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-02-10 09:56:52.674409 | orchestrator | skipping: [testbed-node-4] => (item={'id': '502debfdbc9aa52ea7bcf0fbfd4714e8c5967cfa552ca6f004564fca8d9e62cc', 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 11 minutes'})  2025-02-10 09:56:52.674419 | orchestrator | skipping: [testbed-node-4] => (item={'id': '24fa8b91edb28eddee9d89c611cb5749a135517f54f45ffb1303d031a6bb70e1', 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-02-10 09:56:52.674429 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7c1997b1421f5da22db3a8b83a02929bad87d7c5539ee60747615035d3b645f9', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2025-02-10 09:56:52.674439 | 
orchestrator | skipping: [testbed-node-4] => (item={'id': '11add24c7c1a230759aa7c39615bfd74a4a145ae666728bc45dfe5733b2ab97d', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-rgw-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2025-02-10 09:56:52.674449 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd758bb84f842616b57063087f7eb508c72f50dfaecf35767b3bb70fb3a78b71f', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 26 minutes'})  2025-02-10 09:56:52.674458 | orchestrator | skipping: [testbed-node-4] => (item={'id': '59265a6bb40a3f208fb06087f577ecde305851f37c7375731671f2e70c1a20d6', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 27 minutes'})  2025-02-10 09:56:52.674468 | orchestrator | ok: [testbed-node-4] => (item={'id': 'e9bdcd55cbb78f09d278fada107b6f1d08590accb1714c481a0986631688a824', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 29 minutes'}) 2025-02-10 09:56:52.674478 | orchestrator | ok: [testbed-node-4] => (item={'id': '4bd1ef728a8769ebc782ad2b81f8a889d2fddab16f17054f65d938610e8d9d03', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 29 minutes'}) 2025-02-10 09:56:52.674494 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fc1d1d7f9e22925ad2d6f413f13cf66bb9fac2ae0acdf4a1b493d160fe696c43', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 33 minutes'})  2025-02-10 09:57:01.715562 | orchestrator | skipping: [testbed-node-4] => (item={'id': '47d2b072063f4e5954e60d31256fa5d1eedb1ccddc04104a02ddaed5aa128976', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 35 minutes (healthy)'})  2025-02-10 09:57:01.715659 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'aa12f3a47cccb25e083af386a6f42bb951bec5cff286fa913a4c740a67a9ed89', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 35 minutes (healthy)'})  2025-02-10 09:57:01.716881 | orchestrator | skipping: [testbed-node-4] => (item={'id': '762db99818601572303c825665716b7a0e990fda92eb6234ed8cf898770b93f6', 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'name': '/cron', 'state': 'running', 'status': 'Up 36 minutes'})  2025-02-10 09:57:01.716924 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a774249f39734fdd15670a14392a12513f53939a311be3a1e2fc5bae08163bbd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 36 minutes'})  2025-02-10 09:57:01.716954 | orchestrator | skipping: [testbed-node-4] => (item={'id': '18afc4631afffef70f19342c34dac68776276fcdc7f88cd173a698361c82c136', 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 37 minutes'})  2025-02-10 09:57:01.716960 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eb01bced91bf4e9213135c8f938c6d7305881b371eed93631f179014c173804f', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 
minutes (healthy)'})  2025-02-10 09:57:01.716965 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f205dbea72c4cb4f5d7f408ea570f90a087872f75280fe61ad9a23bd7a998e8b', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-02-10 09:57:01.716970 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6be55263b59be72a6a1d48000f173c12fbdc34d1eaed9466e6a2dc1205c85c14', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-02-10 09:57:01.716976 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0f6588cce1d7facde6157819183ee60de895ae81ae863a9803bab506feed286d', 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-02-10 09:57:01.716982 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7095c356656d15a6843066b202702c498c9e68deb28ba7e78f1448fd5ae8231a', 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-02-10 09:57:01.716990 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9f1fc36817d2bda9048c44572c1bd42f2614dfe88b48c2e3566dfd6c882c9fc2', 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2025-02-10 09:57:01.716997 | orchestrator | skipping: [testbed-node-5] => (item={'id': '449c8240d6385d0782c9415c6ee8feb3092b58d4e652cd991e37b43fdc16f3ae', 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 11 minutes'})  2025-02-10 09:57:01.717012 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3aa7d19d9f7dcfa81866f09df43d0db17ec59f608d3c18af307baafd2d0d0218', 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-02-10 09:57:01.717018 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a9a9476089d1418c4a765e937954508bcb4942bd2eb96435a7f4aa556d0e2546', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2025-02-10 09:57:01.717040 | orchestrator | skipping: [testbed-node-5] => (item={'id': '68828c683928d2ce21394cdb891a9f097b131f1ce0449e31f674589dd85a8a19', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-rgw-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2025-02-10 09:57:01.717046 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2af16f45345dc8a48170aced21e4ebd6eecc647075782a175ec8da9b83f85360', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 26 minutes'})  2025-02-10 09:57:01.717051 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fc71b0f5f34ef92edee2deb5cfbd0ddc0dfd486ebc6a4141f0998d432d34e04a', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 27 minutes'})  2025-02-10 09:57:01.717068 | orchestrator | ok: [testbed-node-5] => (item={'id': 
'84dd20e82dcf278fa55c5b0fe2845809c4f85730fd9e301efa56b61de6c1a767', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 29 minutes'}) 2025-02-10 09:57:01.717076 | orchestrator | ok: [testbed-node-5] => (item={'id': '23c028de3b061d0489184b7da32754fc23e9309714a451e26340332ab57fae71', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 29 minutes'}) 2025-02-10 09:57:01.717081 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9c8a8133dfe66f8374053e1c3ef1d02d093c5cf745cbdcd6c2a5326a86296050', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 33 minutes'})  2025-02-10 09:57:01.717086 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ef0172c8468410e74500905e1e0fc890e0a8e2e32338a0a4d4440f3f724412d8', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 35 minutes (healthy)'})  2025-02-10 09:57:01.717091 | orchestrator | skipping: [testbed-node-5] => (item={'id': '66e1c179ed9b38ee8d586dd21ab957e4596ae06a9b9e5faa26624fc670e681b2', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 35 minutes (healthy)'})  2025-02-10 09:57:01.717096 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c77eb3bb069b4cc648e3092bb66c960e86ee2111f30edaba53fd7acf3a195fea', 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'name': '/cron', 'state': 'running', 'status': 'Up 36 minutes'})  2025-02-10 09:57:01.717101 | orchestrator | skipping: [testbed-node-5] => (item={'id': '29f6f3231194dd61b53d2d7aa2f4d32a8bb23bd3b0a5da6353c60cda7123c6c2', 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 36 minutes'})  2025-02-10 09:57:01.717106 | orchestrator | skipping: [testbed-node-5] => (item={'id': '093d4b9e5a61100bce61d4a6df05240432d9310aaee1e50b91de1d6b3e2f1e12', 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 37 minutes'})  2025-02-10 09:57:01.717111 | orchestrator | 2025-02-10 09:57:01.717117 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-02-10 09:57:01.717123 | orchestrator | Monday 10 February 2025 09:56:52 +0000 (0:00:00.579) 0:00:05.338 ******* 2025-02-10 09:57:01.717128 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:01.717134 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:57:01.717139 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:57:01.717144 | orchestrator | 2025-02-10 09:57:01.717149 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-02-10 09:57:01.717154 | orchestrator | Monday 10 February 2025 09:56:53 +0000 (0:00:00.379) 0:00:05.717 ******* 2025-02-10 09:57:01.717159 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:57:01.717165 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:57:01.717170 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:57:01.717174 | orchestrator | 2025-02-10 09:57:01.717179 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-02-10 09:57:01.717184 | orchestrator | Monday 10 February 2025 09:56:53 +0000 (0:00:00.543) 0:00:06.261 
******* 2025-02-10 09:57:01.717189 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:01.717194 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:57:01.717199 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:57:01.717204 | orchestrator | 2025-02-10 09:57:01.717211 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-02-10 09:57:01.717216 | orchestrator | Monday 10 February 2025 09:56:53 +0000 (0:00:00.332) 0:00:06.593 ******* 2025-02-10 09:57:01.717221 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:01.717226 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:57:01.717234 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:57:01.717239 | orchestrator | 2025-02-10 09:57:01.717243 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-02-10 09:57:01.717254 | orchestrator | Monday 10 February 2025 09:56:54 +0000 (0:00:00.354) 0:00:06.948 ******* 2025-02-10 09:57:15.294400 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-02-10 09:57:15.294534 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-02-10 09:57:15.294554 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:57:15.294571 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-02-10 09:57:15.294585 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-02-10 09:57:15.294599 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:57:15.294613 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-02-10 09:57:15.294627 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-02-10 09:57:15.294641 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:57:15.294655 | orchestrator | 2025-02-10 09:57:15.294669 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-02-10 09:57:15.294683 | orchestrator | Monday 10 February 2025 09:56:54 +0000 (0:00:00.336) 0:00:07.284 ******* 2025-02-10 09:57:15.294697 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:15.294712 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:57:15.294725 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:57:15.294789 | orchestrator | 2025-02-10 09:57:15.294808 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-02-10 09:57:15.294822 | orchestrator | Monday 10 February 2025 09:56:55 +0000 (0:00:00.548) 0:00:07.833 ******* 2025-02-10 09:57:15.294836 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:57:15.294850 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:57:15.294864 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:57:15.294877 | orchestrator | 2025-02-10 09:57:15.294891 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-02-10 09:57:15.294905 | orchestrator | Monday 10 February 2025 09:56:55 +0000 (0:00:00.353) 0:00:08.186 ******* 2025-02-10 09:57:15.294921 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:57:15.294937 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:57:15.294952 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:57:15.294967 | orchestrator | 
2025-02-10 09:57:15.294983 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-02-10 09:57:15.294998 | orchestrator | Monday 10 February 2025 09:56:55 +0000 (0:00:00.347) 0:00:08.534 ******* 2025-02-10 09:57:15.295013 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:15.295028 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:57:15.295044 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:57:15.295059 | orchestrator | 2025-02-10 09:57:15.295075 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-02-10 09:57:15.295090 | orchestrator | Monday 10 February 2025 09:56:56 +0000 (0:00:00.335) 0:00:08.869 ******* 2025-02-10 09:57:15.295105 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:57:15.295120 | orchestrator | 2025-02-10 09:57:15.295135 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-02-10 09:57:15.295151 | orchestrator | Monday 10 February 2025 09:56:56 +0000 (0:00:00.526) 0:00:09.396 ******* 2025-02-10 09:57:15.295166 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:57:15.295182 | orchestrator | 2025-02-10 09:57:15.295197 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-02-10 09:57:15.295212 | orchestrator | Monday 10 February 2025 09:56:57 +0000 (0:00:00.778) 0:00:10.174 ******* 2025-02-10 09:57:15.295227 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:57:15.295243 | orchestrator | 2025-02-10 09:57:15.295288 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:57:15.295303 | orchestrator | Monday 10 February 2025 09:56:57 +0000 (0:00:00.277) 0:00:10.451 ******* 2025-02-10 09:57:15.295317 | orchestrator | 2025-02-10 09:57:15.295331 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:57:15.295344 | orchestrator | Monday 10 February 2025 09:56:57 +0000 (0:00:00.073) 0:00:10.525 ******* 2025-02-10 09:57:15.295359 | orchestrator | 2025-02-10 09:57:15.295372 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:57:15.295386 | orchestrator | Monday 10 February 2025 09:56:57 +0000 (0:00:00.089) 0:00:10.615 ******* 2025-02-10 09:57:15.295400 | orchestrator | 2025-02-10 09:57:15.295414 | orchestrator | TASK [Print report file information] ******************************************* 2025-02-10 09:57:15.295428 | orchestrator | Monday 10 February 2025 09:56:58 +0000 (0:00:00.089) 0:00:10.704 ******* 2025-02-10 09:57:15.295441 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:57:15.295455 | orchestrator | 2025-02-10 09:57:15.295469 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-02-10 09:57:15.295483 | orchestrator | Monday 10 February 2025 09:56:58 +0000 (0:00:00.269) 0:00:10.974 ******* 2025-02-10 09:57:15.295496 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:57:15.295510 | orchestrator | 2025-02-10 09:57:15.295524 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-02-10 09:57:15.295537 | orchestrator | Monday 10 February 2025 09:56:58 +0000 (0:00:00.265) 0:00:11.239 ******* 2025-02-10 09:57:15.295551 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:15.295565 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:57:15.295578 | orchestrator | ok: 
[testbed-node-5] 2025-02-10 09:57:15.295592 | orchestrator | 2025-02-10 09:57:15.295606 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-02-10 09:57:15.295635 | orchestrator | Monday 10 February 2025 09:56:58 +0000 (0:00:00.312) 0:00:11.552 ******* 2025-02-10 09:57:15.295649 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:15.295663 | orchestrator | 2025-02-10 09:57:15.295677 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-02-10 09:57:15.295691 | orchestrator | Monday 10 February 2025 09:56:59 +0000 (0:00:00.531) 0:00:12.084 ******* 2025-02-10 09:57:15.295704 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-10 09:57:15.295718 | orchestrator | 2025-02-10 09:57:15.295788 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-02-10 09:57:15.295806 | orchestrator | Monday 10 February 2025 09:57:01 +0000 (0:00:02.304) 0:00:14.389 ******* 2025-02-10 09:57:15.295820 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:15.295834 | orchestrator | 2025-02-10 09:57:15.295848 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-02-10 09:57:15.295862 | orchestrator | Monday 10 February 2025 09:57:01 +0000 (0:00:00.169) 0:00:14.558 ******* 2025-02-10 09:57:15.295875 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:15.295889 | orchestrator | 2025-02-10 09:57:15.295903 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-02-10 09:57:15.295917 | orchestrator | Monday 10 February 2025 09:57:02 +0000 (0:00:00.262) 0:00:14.821 ******* 2025-02-10 09:57:15.295931 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:57:15.295945 | orchestrator | 2025-02-10 09:57:15.295959 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-02-10 09:57:15.295973 | orchestrator | Monday 10 February 2025 09:57:02 +0000 (0:00:00.145) 0:00:14.966 ******* 2025-02-10 09:57:15.295986 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:15.296000 | orchestrator | 2025-02-10 09:57:15.296014 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-02-10 09:57:15.296028 | orchestrator | Monday 10 February 2025 09:57:02 +0000 (0:00:00.157) 0:00:15.124 ******* 2025-02-10 09:57:15.296042 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:15.296055 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:57:15.296069 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:57:15.296094 | orchestrator | 2025-02-10 09:57:15.296108 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-02-10 09:57:15.296122 | orchestrator | Monday 10 February 2025 09:57:02 +0000 (0:00:00.396) 0:00:15.521 ******* 2025-02-10 09:57:15.296136 | orchestrator | changed: [testbed-node-3] 2025-02-10 09:57:15.296150 | orchestrator | changed: [testbed-node-4] 2025-02-10 09:57:15.296164 | orchestrator | changed: [testbed-node-5] 2025-02-10 09:57:15.296177 | orchestrator | 2025-02-10 09:57:15.296192 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-02-10 09:57:15.296205 | orchestrator | Monday 10 February 2025 09:57:04 +0000 (0:00:01.505) 0:00:17.027 ******* 2025-02-10 09:57:15.296219 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:15.296233 | orchestrator | ok: 
[testbed-node-4] 2025-02-10 09:57:15.296247 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:57:15.296267 | orchestrator | 2025-02-10 09:57:15.296281 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-02-10 09:57:15.296295 | orchestrator | Monday 10 February 2025 09:57:04 +0000 (0:00:00.597) 0:00:17.624 ******* 2025-02-10 09:57:15.296309 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:15.296322 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:57:15.296336 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:57:15.296350 | orchestrator | 2025-02-10 09:57:15.296364 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-02-10 09:57:15.296378 | orchestrator | Monday 10 February 2025 09:57:05 +0000 (0:00:00.532) 0:00:18.156 ******* 2025-02-10 09:57:15.296392 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:57:15.296406 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:57:15.296419 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:57:15.296433 | orchestrator | 2025-02-10 09:57:15.296447 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-02-10 09:57:15.296461 | orchestrator | Monday 10 February 2025 09:57:05 +0000 (0:00:00.359) 0:00:18.516 ******* 2025-02-10 09:57:15.296474 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:15.296489 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:57:15.296502 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:57:15.296516 | orchestrator | 2025-02-10 09:57:15.296530 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-02-10 09:57:15.296544 | orchestrator | Monday 10 February 2025 09:57:06 +0000 (0:00:00.580) 0:00:19.097 ******* 2025-02-10 09:57:15.296558 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:57:15.296572 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:57:15.296585 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:57:15.296599 | orchestrator | 2025-02-10 09:57:15.296613 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-02-10 09:57:15.296627 | orchestrator | Monday 10 February 2025 09:57:06 +0000 (0:00:00.335) 0:00:19.433 ******* 2025-02-10 09:57:15.296640 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:57:15.296663 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:57:15.296677 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:57:15.296691 | orchestrator | 2025-02-10 09:57:15.296705 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-02-10 09:57:15.296719 | orchestrator | Monday 10 February 2025 09:57:07 +0000 (0:00:00.303) 0:00:19.736 ******* 2025-02-10 09:57:15.296733 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:15.296771 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:57:15.296786 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:57:15.296800 | orchestrator | 2025-02-10 09:57:15.296814 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-02-10 09:57:15.296827 | orchestrator | Monday 10 February 2025 09:57:07 +0000 (0:00:00.467) 0:00:20.204 ******* 2025-02-10 09:57:15.296841 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:15.296854 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:57:15.296868 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:57:15.296881 | orchestrator | 
2025-02-10 09:57:15.296895 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-02-10 09:57:15.296921 | orchestrator | Monday 10 February 2025 09:57:08 +0000 (0:00:00.764) 0:00:20.969 ******* 2025-02-10 09:57:15.296935 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:15.296949 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:57:15.296963 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:57:15.296976 | orchestrator | 2025-02-10 09:57:15.296990 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-02-10 09:57:15.297004 | orchestrator | Monday 10 February 2025 09:57:08 +0000 (0:00:00.357) 0:00:21.326 ******* 2025-02-10 09:57:15.297017 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:57:15.297031 | orchestrator | skipping: [testbed-node-4] 2025-02-10 09:57:15.297045 | orchestrator | skipping: [testbed-node-5] 2025-02-10 09:57:15.297059 | orchestrator | 2025-02-10 09:57:15.297072 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-02-10 09:57:15.297096 | orchestrator | Monday 10 February 2025 09:57:08 +0000 (0:00:00.332) 0:00:21.659 ******* 2025-02-10 09:57:15.640350 | orchestrator | ok: [testbed-node-3] 2025-02-10 09:57:15.640506 | orchestrator | ok: [testbed-node-4] 2025-02-10 09:57:15.640539 | orchestrator | ok: [testbed-node-5] 2025-02-10 09:57:15.640565 | orchestrator | 2025-02-10 09:57:15.640591 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-02-10 09:57:15.640612 | orchestrator | Monday 10 February 2025 09:57:09 +0000 (0:00:00.355) 0:00:22.015 ******* 2025-02-10 09:57:15.640628 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:57:15.640643 | orchestrator | 2025-02-10 09:57:15.640657 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-02-10 09:57:15.640672 | orchestrator | Monday 10 February 2025 09:57:10 +0000 (0:00:00.752) 0:00:22.768 ******* 2025-02-10 09:57:15.640696 | orchestrator | skipping: [testbed-node-3] 2025-02-10 09:57:15.640720 | orchestrator | 2025-02-10 09:57:15.640913 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-02-10 09:57:15.640955 | orchestrator | Monday 10 February 2025 09:57:10 +0000 (0:00:00.303) 0:00:23.071 ******* 2025-02-10 09:57:15.640980 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:57:15.641004 | orchestrator | 2025-02-10 09:57:15.641024 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-02-10 09:57:15.641040 | orchestrator | Monday 10 February 2025 09:57:12 +0000 (0:00:01.861) 0:00:24.932 ******* 2025-02-10 09:57:15.641056 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:57:15.641072 | orchestrator | 2025-02-10 09:57:15.641088 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-02-10 09:57:15.641103 | orchestrator | Monday 10 February 2025 09:57:12 +0000 (0:00:00.300) 0:00:25.233 ******* 2025-02-10 09:57:15.641118 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:57:15.641134 | orchestrator | 2025-02-10 09:57:15.641149 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:57:15.641164 | orchestrator | Monday 10 February 2025 
09:57:12 +0000 (0:00:00.292) 0:00:25.525 ******* 2025-02-10 09:57:15.641180 | orchestrator | 2025-02-10 09:57:15.641195 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:57:15.641211 | orchestrator | Monday 10 February 2025 09:57:12 +0000 (0:00:00.088) 0:00:25.613 ******* 2025-02-10 09:57:15.641227 | orchestrator | 2025-02-10 09:57:15.641243 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-02-10 09:57:15.641258 | orchestrator | Monday 10 February 2025 09:57:12 +0000 (0:00:00.070) 0:00:25.684 ******* 2025-02-10 09:57:15.641271 | orchestrator | 2025-02-10 09:57:15.641285 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-02-10 09:57:15.641298 | orchestrator | Monday 10 February 2025 09:57:13 +0000 (0:00:00.102) 0:00:25.787 ******* 2025-02-10 09:57:15.641312 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-10 09:57:15.641325 | orchestrator | 2025-02-10 09:57:15.641338 | orchestrator | TASK [Print report file information] ******************************************* 2025-02-10 09:57:15.641384 | orchestrator | Monday 10 February 2025 09:57:14 +0000 (0:00:01.529) 0:00:27.316 ******* 2025-02-10 09:57:15.641398 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-02-10 09:57:15.641412 | orchestrator |  "msg": [ 2025-02-10 09:57:15.641426 | orchestrator |  "Validator run completed.", 2025-02-10 09:57:15.641441 | orchestrator |  "You can find the report file here:", 2025-02-10 09:57:15.641455 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-02-10T09:56:48+00:00-report.json", 2025-02-10 09:57:15.641470 | orchestrator |  "on the following host:", 2025-02-10 09:57:15.641484 | orchestrator |  "testbed-manager" 2025-02-10 09:57:15.641497 | orchestrator |  ] 2025-02-10 09:57:15.641512 | orchestrator | } 2025-02-10 09:57:15.641525 | orchestrator | 2025-02-10 09:57:15.641539 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:57:15.641555 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-02-10 09:57:15.641571 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-02-10 09:57:15.641584 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-02-10 09:57:15.641598 | orchestrator | 2025-02-10 09:57:15.641612 | orchestrator | 2025-02-10 09:57:15.641625 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:57:15.641639 | orchestrator | Monday 10 February 2025 09:57:15 +0000 (0:00:00.642) 0:00:27.959 ******* 2025-02-10 09:57:15.641653 | orchestrator | =============================================================================== 2025-02-10 09:57:15.641666 | orchestrator | Get ceph osd tree ------------------------------------------------------- 2.30s 2025-02-10 09:57:15.641680 | orchestrator | Aggregate test results step one ----------------------------------------- 1.86s 2025-02-10 09:57:15.641694 | orchestrator | Write report file ------------------------------------------------------- 1.53s 2025-02-10 09:57:15.641724 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.51s 2025-02-10 09:57:15.641738 | orchestrator | Create report output 
directory ------------------------------------------ 1.06s 2025-02-10 09:57:15.641783 | orchestrator | Aggregate test results step two ----------------------------------------- 0.78s 2025-02-10 09:57:15.641797 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.76s 2025-02-10 09:57:15.641811 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.75s 2025-02-10 09:57:15.641824 | orchestrator | Get timestamp for report file ------------------------------------------- 0.71s 2025-02-10 09:57:15.641869 | orchestrator | Print report file information ------------------------------------------- 0.64s 2025-02-10 09:57:15.968359 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.61s 2025-02-10 09:57:15.968505 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.60s 2025-02-10 09:57:15.968537 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.58s 2025-02-10 09:57:15.968564 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.58s 2025-02-10 09:57:15.968588 | orchestrator | Prepare test data ------------------------------------------------------- 0.55s 2025-02-10 09:57:15.968613 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.55s 2025-02-10 09:57:15.968638 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.54s 2025-02-10 09:57:15.968659 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.53s 2025-02-10 09:57:15.968682 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.53s 2025-02-10 09:57:15.968707 | orchestrator | Aggregate test results step one ----------------------------------------- 0.53s 2025-02-10 09:57:15.968820 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-02-10 09:57:15.980062 | orchestrator | + set -e 2025-02-10 09:57:15.980406 | orchestrator | + source /opt/manager-vars.sh 2025-02-10 09:57:15.980424 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-10 09:57:15.980438 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-10 09:57:15.980452 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-10 09:57:15.980478 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-10 09:57:15.980492 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-10 09:57:15.980507 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-10 09:57:15.980521 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-10 09:57:15.980535 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-10 09:57:15.980549 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-10 09:57:15.980562 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-10 09:57:15.980576 | orchestrator | ++ export ARA=false 2025-02-10 09:57:15.980590 | orchestrator | ++ ARA=false 2025-02-10 09:57:15.980604 | orchestrator | ++ export TEMPEST=false 2025-02-10 09:57:15.980618 | orchestrator | ++ TEMPEST=false 2025-02-10 09:57:15.980631 | orchestrator | ++ export IS_ZUUL=true 2025-02-10 09:57:15.980645 | orchestrator | ++ IS_ZUUL=true 2025-02-10 09:57:15.980659 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.97 2025-02-10 09:57:15.980673 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.97 2025-02-10 09:57:15.980686 | orchestrator | ++ export EXTERNAL_API=false 2025-02-10 09:57:15.980700 | 
orchestrator | ++ EXTERNAL_API=false 2025-02-10 09:57:15.980714 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-10 09:57:15.980728 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-10 09:57:15.980772 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-10 09:57:15.980788 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-10 09:57:15.980802 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-02-10 09:57:15.980816 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-10 09:57:15.980829 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-02-10 09:57:15.980843 | orchestrator | + source /etc/os-release 2025-02-10 09:57:15.980857 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.1 LTS' 2025-02-10 09:57:15.980871 | orchestrator | ++ NAME=Ubuntu 2025-02-10 09:57:15.980884 | orchestrator | ++ VERSION_ID=24.04 2025-02-10 09:57:15.980899 | orchestrator | ++ VERSION='24.04.1 LTS (Noble Numbat)' 2025-02-10 09:57:15.980912 | orchestrator | ++ VERSION_CODENAME=noble 2025-02-10 09:57:15.980926 | orchestrator | ++ ID=ubuntu 2025-02-10 09:57:15.980940 | orchestrator | ++ ID_LIKE=debian 2025-02-10 09:57:15.980954 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-02-10 09:57:15.980974 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-02-10 09:57:16.011402 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-02-10 09:57:16.011516 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-02-10 09:57:16.011532 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-02-10 09:57:16.011544 | orchestrator | ++ LOGO=ubuntu-logo 2025-02-10 09:57:16.011555 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-02-10 09:57:16.011568 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-02-10 09:57:16.011581 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-02-10 09:57:16.011607 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-02-10 09:57:39.109001 | orchestrator | 2025-02-10 09:57:39.254432 | orchestrator | # Status of Elasticsearch 2025-02-10 09:57:39.254529 | orchestrator | 2025-02-10 09:57:39.254544 | orchestrator | + pushd /opt/configuration/contrib 2025-02-10 09:57:39.254558 | orchestrator | + echo 2025-02-10 09:57:39.254569 | orchestrator | + echo '# Status of Elasticsearch' 2025-02-10 09:57:39.254581 | orchestrator | + echo 2025-02-10 09:57:39.254603 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-02-10 09:57:39.254648 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 8; active_shards: 19; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=8 'active'=19 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-02-10 09:57:39.287300 | orchestrator | 2025-02-10 09:57:39.287426 | orchestrator | # Status of MariaDB 2025-02-10 09:57:39.287445 | orchestrator | 2025-02-10 09:57:39.287466 | orchestrator | + echo 2025-02-10 09:57:39.287490 | orchestrator | + echo '# Status of MariaDB' 2025-02-10 09:57:39.287513 | orchestrator | + echo 2025-02-10 09:57:39.287578 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root -p password -H api-int.testbed.osism.xyz -c 1 2025-02-10 09:57:39.314201 | orchestrator | Reading package lists... 2025-02-10 09:57:39.728695 | orchestrator | Building dependency tree... 2025-02-10 09:57:39.729363 | orchestrator | Reading state information... 2025-02-10 09:57:40.270690 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-02-10 09:57:40.458493 | orchestrator | bc set to manually installed. 2025-02-10 09:57:40.458928 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2025-02-10 09:57:40.458977 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-02-10 09:57:40.535578 | orchestrator | 2025-02-10 09:57:40.535763 | orchestrator | # Status of Prometheus 2025-02-10 09:57:40.535801 | orchestrator | 2025-02-10 09:57:40.535818 | orchestrator | + echo 2025-02-10 09:57:40.535833 | orchestrator | + echo '# Status of Prometheus' 2025-02-10 09:57:40.535847 | orchestrator | + echo 2025-02-10 09:57:40.535862 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-02-10 09:57:40.535894 | orchestrator | Unauthorized 2025-02-10 09:57:40.539331 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-02-10 09:57:40.595682 | orchestrator | Unauthorized 2025-02-10 09:57:40.599789 | orchestrator | 2025-02-10 09:57:41.064853 | orchestrator | # Status of RabbitMQ 2025-02-10 09:57:41.064944 | orchestrator | 2025-02-10 09:57:41.064951 | orchestrator | + echo 2025-02-10 09:57:41.064957 | orchestrator | + echo '# Status of RabbitMQ' 2025-02-10 09:57:41.064962 | orchestrator | + echo 2025-02-10 09:57:41.064968 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-02-10 09:57:41.064986 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-02-10 09:57:41.075168 | orchestrator | 2025-02-10 09:57:41.080046 | orchestrator | # Status of Redis 2025-02-10 09:57:41.080095 | orchestrator | 2025-02-10 09:57:41.080108 | orchestrator | + echo 2025-02-10 09:57:41.080120 | orchestrator | + echo '# Status of Redis' 2025-02-10 09:57:41.080133 | orchestrator | + echo 2025-02-10 09:57:41.080142 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-02-10 09:57:41.080158 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001396s;;;0.000000;10.000000 2025-02-10 09:57:41.080561 | orchestrator | 2025-02-10 09:57:42.783556 | orchestrator | + popd 2025-02-10 09:57:42.783684 | orchestrator | + echo 2025-02-10 09:57:42.783703 | orchestrator | # 
Create backup of MariaDB database 2025-02-10 09:57:42.783776 | orchestrator | 2025-02-10 09:57:42.783804 | orchestrator | + echo '# Create backup of MariaDB database' 2025-02-10 09:57:42.783846 | orchestrator | + echo 2025-02-10 09:57:42.783885 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-02-10 09:57:42.783962 | orchestrator | 2025-02-10 09:57:42 | INFO  | Task ef79b1b7-ee1a-4528-91c9-9aa13cbd7b42 (mariadb_backup) was prepared for execution. 2025-02-10 09:57:46.357919 | orchestrator | 2025-02-10 09:57:42 | INFO  | It takes a moment until task ef79b1b7-ee1a-4528-91c9-9aa13cbd7b42 (mariadb_backup) has been started and output is visible here. 2025-02-10 09:57:46.358081 | orchestrator | 2025-02-10 09:57:46.360061 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:57:46.360659 | orchestrator | 2025-02-10 09:57:46.361812 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:57:46.362318 | orchestrator | Monday 10 February 2025 09:57:46 +0000 (0:00:00.189) 0:00:00.189 ******* 2025-02-10 09:57:46.707245 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:57:46.828865 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:57:46.829930 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:57:46.829979 | orchestrator | 2025-02-10 09:57:47.617068 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:57:47.617322 | orchestrator | Monday 10 February 2025 09:57:46 +0000 (0:00:00.470) 0:00:00.660 ******* 2025-02-10 09:57:47.617367 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-02-10 09:57:47.618217 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-02-10 09:57:47.618280 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-02-10 09:57:47.619242 | orchestrator | 2025-02-10 09:57:47.619696 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-02-10 09:57:47.620667 | orchestrator | 2025-02-10 09:57:47.621245 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-02-10 09:57:47.621810 | orchestrator | Monday 10 February 2025 09:57:47 +0000 (0:00:00.793) 0:00:01.454 ******* 2025-02-10 09:57:48.183351 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:57:48.184069 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-02-10 09:57:48.184107 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-02-10 09:57:48.184120 | orchestrator | 2025-02-10 09:57:48.184140 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-02-10 09:57:48.184809 | orchestrator | Monday 10 February 2025 09:57:48 +0000 (0:00:00.565) 0:00:02.019 ******* 2025-02-10 09:57:49.104465 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:57:49.107192 | orchestrator | 2025-02-10 09:57:49.107907 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-02-10 09:57:49.107967 | orchestrator | Monday 10 February 2025 09:57:49 +0000 (0:00:00.917) 0:00:02.937 ******* 2025-02-10 09:57:52.840614 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:57:52.843148 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:57:52.846762 | orchestrator | ok: [testbed-node-1] 2025-02-10 
09:57:52.846805 | orchestrator | 2025-02-10 09:57:52.847787 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-02-10 09:57:52.847849 | orchestrator | Monday 10 February 2025 09:57:52 +0000 (0:00:03.736) 0:00:06.674 ******* 2025-02-10 09:58:11.310593 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-02-10 09:58:11.310802 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-02-10 09:58:11.310837 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-02-10 09:58:11.310874 | orchestrator | mariadb_bootstrap_restart 2025-02-10 09:58:11.405797 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:58:11.406443 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:58:11.406485 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:58:11.409910 | orchestrator | 2025-02-10 09:58:11.410069 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-02-10 09:58:11.410091 | orchestrator | skipping: no hosts matched 2025-02-10 09:58:11.410100 | orchestrator | 2025-02-10 09:58:11.410112 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-02-10 09:58:11.410466 | orchestrator | skipping: no hosts matched 2025-02-10 09:58:11.411456 | orchestrator | 2025-02-10 09:58:11.411676 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-02-10 09:58:11.412860 | orchestrator | skipping: no hosts matched 2025-02-10 09:58:11.413032 | orchestrator | 2025-02-10 09:58:11.414402 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-02-10 09:58:11.414671 | orchestrator | 2025-02-10 09:58:11.415680 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-02-10 09:58:11.415847 | orchestrator | Monday 10 February 2025 09:58:11 +0000 (0:00:18.570) 0:00:25.244 ******* 2025-02-10 09:58:11.795331 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:11.915208 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:58:11.915650 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:58:11.916440 | orchestrator | 2025-02-10 09:58:11.917301 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-02-10 09:58:11.917606 | orchestrator | Monday 10 February 2025 09:58:11 +0000 (0:00:00.509) 0:00:25.753 ******* 2025-02-10 09:58:12.272282 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:12.316118 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:58:12.316396 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:58:12.317654 | orchestrator | 2025-02-10 09:58:12.317691 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:58:12.317912 | orchestrator | 2025-02-10 09:58:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:58:12.318280 | orchestrator | 2025-02-10 09:58:12 | INFO  | Please wait and do not abort execution. 
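For orientation, an assumption based on kolla-ansible defaults rather than anything shown in this log: the full backup above is written by Mariabackup into the mariadb_backup Docker volume on the node that executed it (testbed-node-0 here). One way to peek at the result might be:

# Hedged sketch: list the backup artifacts in the default kolla-ansible volume.
# The volume name mariadb_backup is the kolla-ansible default; adjust if overridden.
docker run --rm -v mariadb_backup:/backup:ro alpine ls -lh /backup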
2025-02-10 09:58:12.322188 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:58:12.323852 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:58:12.324565 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:58:12.324593 | orchestrator | 2025-02-10 09:58:12.324609 | orchestrator | 2025-02-10 09:58:12.324625 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:58:12.324647 | orchestrator | Monday 10 February 2025 09:58:12 +0000 (0:00:00.401) 0:00:26.154 ******* 2025-02-10 09:58:12.324862 | orchestrator | =============================================================================== 2025-02-10 09:58:12.325581 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 18.57s 2025-02-10 09:58:12.326397 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.74s 2025-02-10 09:58:12.326447 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.92s 2025-02-10 09:58:12.326949 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s 2025-02-10 09:58:12.327426 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.57s 2025-02-10 09:58:12.328122 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.51s 2025-02-10 09:58:12.328568 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s 2025-02-10 09:58:12.329172 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.40s 2025-02-10 09:58:12.963007 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=incremental 2025-02-10 09:58:14.627269 | orchestrator | 2025-02-10 09:58:14 | INFO  | Task b2c5403d-65c5-4b67-964c-090402e32fae (mariadb_backup) was prepared for execution. 2025-02-10 09:58:18.239336 | orchestrator | 2025-02-10 09:58:14 | INFO  | It takes a moment until task b2c5403d-65c5-4b67-964c-090402e32fae (mariadb_backup) has been started and output is visible here. 
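Background on the incremental run that follows, a general Mariabackup property rather than something taken from this job: an incremental backup only records pages changed since a base backup, so it depends on the full backup taken a moment ago. Stripped of the kolla/OSISM wrapping, the two runs correspond roughly to:

# Illustrative plain mariabackup equivalents; directories are placeholders.
mariabackup --backup --target-dir=/backup/full
mariabackup --backup --target-dir=/backup/inc1 --incremental-basedir=/backup/full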
2025-02-10 09:58:18.239473 | orchestrator | 2025-02-10 09:58:18.239725 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-10 09:58:18.239743 | orchestrator | 2025-02-10 09:58:18.239758 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-10 09:58:18.240122 | orchestrator | Monday 10 February 2025 09:58:18 +0000 (0:00:00.209) 0:00:00.209 ******* 2025-02-10 09:58:18.575353 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:18.690098 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:58:18.690302 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:58:18.691146 | orchestrator | 2025-02-10 09:58:18.691178 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-10 09:58:18.691550 | orchestrator | Monday 10 February 2025 09:58:18 +0000 (0:00:00.448) 0:00:00.657 ******* 2025-02-10 09:58:19.519415 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-02-10 09:58:19.521606 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-02-10 09:58:19.521665 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-02-10 09:58:19.522791 | orchestrator | 2025-02-10 09:58:19.522841 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-02-10 09:58:19.523399 | orchestrator | 2025-02-10 09:58:19.523931 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-02-10 09:58:19.524948 | orchestrator | Monday 10 February 2025 09:58:19 +0000 (0:00:00.830) 0:00:01.488 ******* 2025-02-10 09:58:20.078107 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-10 09:58:20.078950 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-02-10 09:58:20.078994 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-02-10 09:58:20.079168 | orchestrator | 2025-02-10 09:58:20.079474 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-02-10 09:58:20.081173 | orchestrator | Monday 10 February 2025 09:58:20 +0000 (0:00:00.562) 0:00:02.050 ******* 2025-02-10 09:58:20.937895 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-10 09:58:20.941482 | orchestrator | 2025-02-10 09:58:20.941965 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-02-10 09:58:20.942011 | orchestrator | Monday 10 February 2025 09:58:20 +0000 (0:00:00.854) 0:00:02.904 ******* 2025-02-10 09:58:25.382133 | orchestrator | ok: [testbed-node-0] 2025-02-10 09:58:25.384275 | orchestrator | ok: [testbed-node-2] 2025-02-10 09:58:25.385041 | orchestrator | ok: [testbed-node-1] 2025-02-10 09:58:25.389326 | orchestrator | 2025-02-10 09:58:25.391358 | orchestrator | TASK [mariadb : Taking incremental database backup via Mariabackup] ************ 2025-02-10 09:58:43.704717 | orchestrator | Monday 10 February 2025 09:58:25 +0000 (0:00:04.443) 0:00:07.347 ******* 2025-02-10 09:58:43.704863 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-02-10 09:58:43.707089 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-02-10 09:58:43.707172 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-02-10 09:58:43.707356 | orchestrator | 
mariadb_bootstrap_restart 2025-02-10 09:58:43.803982 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:58:43.807527 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:58:43.807596 | orchestrator | changed: [testbed-node-0] 2025-02-10 09:58:43.808384 | orchestrator | 2025-02-10 09:58:43.808433 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-02-10 09:58:43.810211 | orchestrator | skipping: no hosts matched 2025-02-10 09:58:43.810924 | orchestrator | 2025-02-10 09:58:43.811924 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-02-10 09:58:43.812778 | orchestrator | skipping: no hosts matched 2025-02-10 09:58:43.813939 | orchestrator | 2025-02-10 09:58:43.814695 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-02-10 09:58:43.816737 | orchestrator | skipping: no hosts matched 2025-02-10 09:58:43.817957 | orchestrator | 2025-02-10 09:58:43.818130 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-02-10 09:58:43.818640 | orchestrator | 2025-02-10 09:58:43.821799 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-02-10 09:58:44.146243 | orchestrator | Monday 10 February 2025 09:58:43 +0000 (0:00:18.426) 0:00:25.774 ******* 2025-02-10 09:58:44.146362 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:44.266255 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:58:44.715881 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:58:44.716012 | orchestrator | 2025-02-10 09:58:44.716032 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-02-10 09:58:44.716046 | orchestrator | Monday 10 February 2025 09:58:44 +0000 (0:00:00.458) 0:00:26.232 ******* 2025-02-10 09:58:44.716076 | orchestrator | skipping: [testbed-node-0] 2025-02-10 09:58:44.756259 | orchestrator | skipping: [testbed-node-1] 2025-02-10 09:58:44.757113 | orchestrator | skipping: [testbed-node-2] 2025-02-10 09:58:44.757161 | orchestrator | 2025-02-10 09:58:44.757756 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 09:58:44.758182 | orchestrator | 2025-02-10 09:58:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 09:58:44.759834 | orchestrator | 2025-02-10 09:58:44 | INFO  | Please wait and do not abort execution. 
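A note for later, not taken from this log: restoring from these backups would follow the usual Mariabackup sequence of preparing the full backup, applying the incremental on top of it, and copying the data back while MariaDB is stopped. In raw mariabackup terms, roughly:

# Hedged restore sketch with placeholder paths; run where mariabackup is
# available (e.g. inside a mariadb container) and with the database stopped.
mariabackup --prepare --target-dir=/backup/full
mariabackup --prepare --target-dir=/backup/full --incremental-dir=/backup/inc1
mariabackup --copy-back --target-dir=/backup/full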
2025-02-10 09:58:44.760132 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-10 09:58:44.760927 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:58:44.762831 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-10 09:58:44.763267 | orchestrator | 2025-02-10 09:58:44.763900 | orchestrator | 2025-02-10 09:58:44.764372 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 09:58:44.765233 | orchestrator | Monday 10 February 2025 09:58:44 +0000 (0:00:00.495) 0:00:26.728 ******* 2025-02-10 09:58:44.766496 | orchestrator | =============================================================================== 2025-02-10 09:58:44.767269 | orchestrator | mariadb : Taking incremental database backup via Mariabackup ----------- 18.43s 2025-02-10 09:58:44.768049 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 4.44s 2025-02-10 09:58:44.768451 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.85s 2025-02-10 09:58:44.772152 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.83s 2025-02-10 09:58:44.774367 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.56s 2025-02-10 09:58:44.774405 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.50s 2025-02-10 09:58:44.774583 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.46s 2025-02-10 09:58:44.775086 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.45s 2025-02-10 09:58:45.405190 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-02-10 09:58:45.412621 | orchestrator | + set -e 2025-02-10 09:58:45.413772 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-10 09:58:45.413802 | orchestrator | ++ export INTERACTIVE=false 2025-02-10 09:58:45.413816 | orchestrator | ++ INTERACTIVE=false 2025-02-10 09:58:45.413829 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-10 09:58:45.413841 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-10 09:58:45.413854 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-02-10 09:58:45.413873 | orchestrator | +++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2025-02-10 09:58:45.459887 | orchestrator | 2025-02-10 09:58:53.506329 | orchestrator | # OpenStack endpoints 2025-02-10 09:58:53.506466 | orchestrator | 2025-02-10 09:58:53.506490 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-10 09:58:53.506505 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-10 09:58:53.506519 | orchestrator | + export OS_CLOUD=admin 2025-02-10 09:58:53.506532 | orchestrator | + OS_CLOUD=admin 2025-02-10 09:58:53.506545 | orchestrator | + echo 2025-02-10 09:58:53.506570 | orchestrator | + echo '# OpenStack endpoints' 2025-02-10 09:58:53.506583 | orchestrator | + echo 2025-02-10 09:58:53.506597 | orchestrator | + openstack endpoint list 2025-02-10 09:58:53.506632 | orchestrator | +----------------------------------+-----------+------------------+-------------------------+---------+-----------+---------------------------------------------------------------------+ 2025-02-10 09:58:53.506651 | 
orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-02-10 09:58:53.506700 | orchestrator | +----------------------------------+-----------+------------------+-------------------------+---------+-----------+---------------------------------------------------------------------+ 2025-02-10 09:58:53.506718 | orchestrator | | 083a520aa9b346db952fa904a4e2ac54 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-02-10 09:58:53.506733 | orchestrator | | 10590adec26f432c9c66b352f98b2348 | RegionOne | ironic-inspector | baremetal-introspection | True | internal | https://api-int.testbed.osism.xyz:5050 | 2025-02-10 09:58:53.506747 | orchestrator | | 4f59ee5592824eb9a849403ca5a377cd | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-02-10 09:58:53.506787 | orchestrator | | 5d0f22ea7cfd46cc8d99a541825695af | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-02-10 09:58:53.506801 | orchestrator | | 6555e83fc9cf48a5aa8d8f7b83ce1181 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-02-10 09:58:53.506814 | orchestrator | | 6e16689bb3ce45fa8e2312cd4ba7a333 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-02-10 09:58:53.506827 | orchestrator | | 74fa7845047b4ac3a069d58cf229a23e | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-02-10 09:58:53.506841 | orchestrator | | 793fcddc6a0c416c869e2b4a5f409015 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-02-10 09:58:53.506854 | orchestrator | | 8c0376761e9048939069c2810453b0db | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-02-10 09:58:53.506866 | orchestrator | | 8d96c63c9e734342b6082460eabcf20c | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-02-10 09:58:53.506879 | orchestrator | | 928c2638751e49f59aa50d32f3e535f3 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-02-10 09:58:53.506892 | orchestrator | | 978e68762d944197b138b4ac9612ae3a | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-02-10 09:58:53.506904 | orchestrator | | a1925d400917498d9f4be46f1662f51e | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-02-10 09:58:53.506968 | orchestrator | | a40c7f09cb7041229e70aa686fa349a4 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-02-10 09:58:53.506983 | orchestrator | | a625d0d319d6422d91aeb4463aafbd5e | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-02-10 09:58:53.506997 | orchestrator | | a765bfa06eff44ea9e6e3181bf9cc9b2 | RegionOne | ironic | baremetal | True | internal | https://api-int.testbed.osism.xyz:6385 | 2025-02-10 09:58:53.507010 | orchestrator | | a85fa09b6df44690bce8cd0b350de8c6 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-02-10 09:58:53.507024 | orchestrator | | ad5d9e28346b41be96a515c01b35fa2a | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-02-10 09:58:53.507037 | orchestrator | | 
bbb19a937bea45a5af712105094ea76d | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-02-10 09:58:53.507063 | orchestrator | | c6fdccaf5aaf49938906e416a21efed1 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-02-10 09:58:53.834423 | orchestrator | | d606c5235adf429f8779ce0ec04d70a6 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-02-10 09:58:53.834547 | orchestrator | | d6b8b56f855f45edbc5bd45966dc0be2 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-02-10 09:58:53.834608 | orchestrator | | e2260a073d7d493c908495aa9f641c6e | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-02-10 09:58:53.834621 | orchestrator | | e3830c290f194840909e912114530933 | RegionOne | ironic | baremetal | True | public | https://api.testbed.osism.xyz:6385 | 2025-02-10 09:58:53.834634 | orchestrator | | e99453747ccb4deb9148c418b087f12d | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-02-10 09:58:53.834646 | orchestrator | | ef83a7c3ffe141bfa52e4f43898a70a1 | RegionOne | ironic-inspector | baremetal-introspection | True | public | https://api.testbed.osism.xyz:5050 | 2025-02-10 09:58:53.834659 | orchestrator | +----------------------------------+-----------+------------------+-------------------------+---------+-----------+---------------------------------------------------------------------+ 2025-02-10 09:58:53.834739 | orchestrator | 2025-02-10 09:58:57.075022 | orchestrator | # Cinder 2025-02-10 09:58:57.075155 | orchestrator | 2025-02-10 09:58:57.075176 | orchestrator | + echo 2025-02-10 09:58:57.075201 | orchestrator | + echo '# Cinder' 2025-02-10 09:58:57.075227 | orchestrator | + echo 2025-02-10 09:58:57.075252 | orchestrator | + openstack volume service list 2025-02-10 09:58:57.075306 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-02-10 09:58:57.427597 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-02-10 09:58:57.427808 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-02-10 09:58:57.427844 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-02-10T09:58:49.000000 | 2025-02-10 09:58:57.427888 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-02-10T09:58:49.000000 | 2025-02-10 09:58:57.427904 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-02-10T09:58:49.000000 | 2025-02-10 09:58:57.427918 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-02-10T09:58:48.000000 | 2025-02-10 09:58:57.427932 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-02-10T09:58:48.000000 | 2025-02-10 09:58:57.427946 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-02-10T09:58:49.000000 | 2025-02-10 09:58:57.427960 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-02-10T09:58:49.000000 | 2025-02-10 09:58:57.427974 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-02-10T09:58:49.000000 | 2025-02-10 09:58:57.427988 | orchestrator | | 
cinder-backup | testbed-node-3 | nova | enabled | up | 2025-02-10T09:58:49.000000 | 2025-02-10 09:58:57.428005 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-02-10 09:58:57.428038 | orchestrator | 2025-02-10 09:59:00.507922 | orchestrator | # Neutron 2025-02-10 09:59:00.508045 | orchestrator | 2025-02-10 09:59:00.508061 | orchestrator | + echo 2025-02-10 09:59:00.508075 | orchestrator | + echo '# Neutron' 2025-02-10 09:59:00.508090 | orchestrator | + echo 2025-02-10 09:59:00.508104 | orchestrator | + openstack network agent list 2025-02-10 09:59:00.508135 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-02-10 09:59:00.837522 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-02-10 09:59:00.837642 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-02-10 09:59:00.837711 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-02-10 09:59:00.837758 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-02-10 09:59:00.837771 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-02-10 09:59:00.837784 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-02-10 09:59:00.837797 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-02-10 09:59:00.837809 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-02-10 09:59:00.837822 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-02-10 09:59:00.837834 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-02-10 09:59:00.837846 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-02-10 09:59:00.837858 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-02-10 09:59:00.837904 | orchestrator | + openstack network service provider list 2025-02-10 09:59:03.696158 | orchestrator | +---------------+------+---------+ 2025-02-10 09:59:04.049459 | orchestrator | | Service Type | Name | Default | 2025-02-10 09:59:04.049583 | orchestrator | +---------------+------+---------+ 2025-02-10 09:59:04.049601 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-02-10 09:59:04.049616 | orchestrator | +---------------+------+---------+ 2025-02-10 09:59:04.049650 | orchestrator | 2025-02-10 09:59:07.045362 | orchestrator | # Nova 2025-02-10 09:59:07.045536 | orchestrator | 2025-02-10 09:59:07.045560 | orchestrator | + echo 2025-02-10 09:59:07.045575 | orchestrator | + echo '# Nova' 2025-02-10 09:59:07.045595 | orchestrator | + echo 2025-02-10 09:59:07.045619 | orchestrator | + openstack compute service list 2025-02-10 09:59:07.045715 | 
orchestrator | +--------------------------------------+----------------+-----------------------+----------+---------+-------+----------------------------+ 2025-02-10 09:59:07.385710 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-02-10 09:59:07.385826 | orchestrator | +--------------------------------------+----------------+-----------------------+----------+---------+-------+----------------------------+ 2025-02-10 09:59:07.385840 | orchestrator | | 9a91b415-6eec-4e10-8a33-f16e8b506ef0 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-02-10T09:58:59.000000 | 2025-02-10 09:59:07.385851 | orchestrator | | b920b39e-373e-4dff-ae2a-195a520b4fbb | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-02-10T09:58:58.000000 | 2025-02-10 09:59:07.385862 | orchestrator | | 26f353f2-64cc-481c-9960-5b497994882e | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-02-10T09:58:58.000000 | 2025-02-10 09:59:07.385872 | orchestrator | | 3189698d-d7ca-4a21-9491-7632ca5e3346 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-02-10T09:59:01.000000 | 2025-02-10 09:59:07.385883 | orchestrator | | 8017dacc-b5dc-4739-b8a7-94812a425fd4 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-02-10T09:59:03.000000 | 2025-02-10 09:59:07.385904 | orchestrator | | c5afabe3-4964-491f-b9e4-6c75fc3119f5 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-02-10T09:59:03.000000 | 2025-02-10 09:59:07.385915 | orchestrator | | 5e56c197-cd8e-4cf7-ac9c-8d5732b78579 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-02-10T09:58:59.000000 | 2025-02-10 09:59:07.385957 | orchestrator | | cc66b085-a648-4e7b-a427-66ec3dde4d94 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-02-10T09:58:59.000000 | 2025-02-10 09:59:07.385975 | orchestrator | | 0f985fe7-5572-4842-8cbf-bc5c80a8df98 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-02-10T09:58:59.000000 | 2025-02-10 09:59:07.385991 | orchestrator | | 50ee101d-692e-41c3-898d-c82f7d16a838 | nova-compute | testbed-node-0-ironic | nova | enabled | up | 2025-02-10T09:58:59.000000 | 2025-02-10 09:59:07.386008 | orchestrator | | 742ed91f-dad1-4200-ad17-31e15de47b2e | nova-compute | testbed-node-2-ironic | nova | enabled | up | 2025-02-10T09:59:00.000000 | 2025-02-10 09:59:07.386092 | orchestrator | | 67abebd8-b326-4ad8-b1c9-3f2cabae1dda | nova-compute | testbed-node-1-ironic | nova | enabled | up | 2025-02-10T09:59:00.000000 | 2025-02-10 09:59:07.386112 | orchestrator | +--------------------------------------+----------------+-----------------------+----------+---------+-------+----------------------------+ 2025-02-10 09:59:07.386148 | orchestrator | + openstack hypervisor list 2025-02-10 09:59:10.544622 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-02-10 09:59:10.852333 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-02-10 09:59:10.852456 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-02-10 09:59:10.852474 | orchestrator | | e0a5c17b-026d-4a06-a31e-56e5c5c05228 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-02-10 09:59:10.852488 | orchestrator | | 18f2f46d-38ff-4fa5-9218-93a085f3dc47 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-02-10 09:59:10.852502 | orchestrator | | c34a697b-42de-4ed5-b511-7a2e3d70a16e | testbed-node-3 | QEMU | 
192.168.16.13 | up | 2025-02-10 09:59:10.852516 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-02-10 09:59:10.852549 | orchestrator | 2025-02-10 09:59:12.533193 | orchestrator | # Run OpenStack test play 2025-02-10 09:59:12.533315 | orchestrator | 2025-02-10 09:59:12.533327 | orchestrator | + echo 2025-02-10 09:59:12.533337 | orchestrator | + echo '# Run OpenStack test play' 2025-02-10 09:59:12.533347 | orchestrator | + echo 2025-02-10 09:59:12.533356 | orchestrator | + osism apply --environment openstack test 2025-02-10 09:59:12.533380 | orchestrator | 2025-02-10 09:59:12 | INFO  | Trying to run play test in environment openstack 2025-02-10 09:59:12.588183 | orchestrator | 2025-02-10 09:59:12 | INFO  | Task 89421d5a-4dcc-427d-9f86-89f3cdf2b3dc (test) was prepared for execution. 2025-02-10 09:59:16.125275 | orchestrator | 2025-02-10 09:59:12 | INFO  | It takes a moment until task 89421d5a-4dcc-427d-9f86-89f3cdf2b3dc (test) has been started and output is visible here. 2025-02-10 09:59:16.125447 | orchestrator | 2025-02-10 09:59:16.129927 | orchestrator | PLAY [Create test project] ***************************************************** 2025-02-10 09:59:16.129965 | orchestrator | 2025-02-10 09:59:16.129988 | orchestrator | TASK [Create test domain] ****************************************************** 2025-02-10 09:59:16.130481 | orchestrator | Monday 10 February 2025 09:59:16 +0000 (0:00:00.087) 0:00:00.087 ******* 2025-02-10 09:59:19.518581 | orchestrator | changed: [localhost] 2025-02-10 09:59:19.518842 | orchestrator | 2025-02-10 09:59:19.518865 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-02-10 09:59:19.518894 | orchestrator | Monday 10 February 2025 09:59:19 +0000 (0:00:03.397) 0:00:03.484 ******* 2025-02-10 09:59:23.453410 | orchestrator | changed: [localhost] 2025-02-10 09:59:23.454109 | orchestrator | 2025-02-10 09:59:23.454154 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-02-10 09:59:23.454182 | orchestrator | Monday 10 February 2025 09:59:23 +0000 (0:00:03.932) 0:00:07.417 ******* 2025-02-10 09:59:28.850988 | orchestrator | changed: [localhost] 2025-02-10 09:59:28.851251 | orchestrator | 2025-02-10 09:59:28.851283 | orchestrator | TASK [Create test project] ***************************************************** 2025-02-10 09:59:28.851740 | orchestrator | Monday 10 February 2025 09:59:28 +0000 (0:00:05.398) 0:00:12.816 ******* 2025-02-10 09:59:32.660721 | orchestrator | changed: [localhost] 2025-02-10 09:59:32.662795 | orchestrator | 2025-02-10 09:59:32.662825 | orchestrator | TASK [Create test user] ******************************************************** 2025-02-10 09:59:32.663273 | orchestrator | Monday 10 February 2025 09:59:32 +0000 (0:00:03.810) 0:00:16.627 ******* 2025-02-10 09:59:36.629260 | orchestrator | changed: [localhost] 2025-02-10 09:59:48.082607 | orchestrator | 2025-02-10 09:59:48.082801 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-02-10 09:59:48.082824 | orchestrator | Monday 10 February 2025 09:59:36 +0000 (0:00:03.967) 0:00:20.594 ******* 2025-02-10 09:59:48.082860 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-02-10 09:59:52.124239 | orchestrator | changed: [localhost] => (item=member) 2025-02-10 09:59:52.124549 | orchestrator | changed: [localhost] => (item=creator) 2025-02-10 
09:59:52.124582 | orchestrator | 2025-02-10 09:59:52.124599 | orchestrator | TASK [Create test server group] ************************************************ 2025-02-10 09:59:52.124614 | orchestrator | Monday 10 February 2025 09:59:48 +0000 (0:00:11.451) 0:00:32.045 ******* 2025-02-10 09:59:52.124702 | orchestrator | changed: [localhost] 2025-02-10 09:59:52.125606 | orchestrator | 2025-02-10 09:59:52.125674 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-02-10 09:59:52.125698 | orchestrator | Monday 10 February 2025 09:59:52 +0000 (0:00:04.045) 0:00:36.091 ******* 2025-02-10 09:59:56.662718 | orchestrator | changed: [localhost] 2025-02-10 09:59:56.663354 | orchestrator | 2025-02-10 09:59:56.663410 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-02-10 09:59:56.664058 | orchestrator | Monday 10 February 2025 09:59:56 +0000 (0:00:04.536) 0:00:40.628 ******* 2025-02-10 10:00:00.679031 | orchestrator | changed: [localhost] 2025-02-10 10:00:00.679273 | orchestrator | 2025-02-10 10:00:00.679301 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-02-10 10:00:00.679323 | orchestrator | Monday 10 February 2025 10:00:00 +0000 (0:00:04.017) 0:00:44.645 ******* 2025-02-10 10:00:04.270295 | orchestrator | changed: [localhost] 2025-02-10 10:00:04.271732 | orchestrator | 2025-02-10 10:00:04.271838 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-02-10 10:00:08.130747 | orchestrator | Monday 10 February 2025 10:00:04 +0000 (0:00:03.591) 0:00:48.237 ******* 2025-02-10 10:00:08.130948 | orchestrator | changed: [localhost] 2025-02-10 10:00:12.000463 | orchestrator | 2025-02-10 10:00:12.000649 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-02-10 10:00:12.000684 | orchestrator | Monday 10 February 2025 10:00:08 +0000 (0:00:03.858) 0:00:52.096 ******* 2025-02-10 10:00:12.000868 | orchestrator | changed: [localhost] 2025-02-10 10:00:30.119467 | orchestrator | 2025-02-10 10:00:30.119653 | orchestrator | TASK [Create test network topology] ******************************************** 2025-02-10 10:00:30.119677 | orchestrator | Monday 10 February 2025 10:00:11 +0000 (0:00:03.871) 0:00:55.967 ******* 2025-02-10 10:00:30.119710 | orchestrator | changed: [localhost] 2025-02-10 10:02:45.978788 | orchestrator | 2025-02-10 10:02:45.978977 | orchestrator | TASK [Create test instances] *************************************************** 2025-02-10 10:02:45.979015 | orchestrator | Monday 10 February 2025 10:00:30 +0000 (0:00:18.116) 0:01:14.083 ******* 2025-02-10 10:02:45.979063 | orchestrator | changed: [localhost] => (item=test) 2025-02-10 10:03:15.979143 | orchestrator | changed: [localhost] => (item=test-1) 2025-02-10 10:03:15.979300 | orchestrator | changed: [localhost] => (item=test-2) 2025-02-10 10:03:15.979331 | orchestrator | 2025-02-10 10:03:15.979349 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-02-10 10:03:15.979384 | orchestrator | changed: [localhost] => (item=test-3) 2025-02-10 10:03:27.786582 | orchestrator | 2025-02-10 10:03:27.786737 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-02-10 10:03:27.786780 | orchestrator | changed: [localhost] => (item=test-4) 2025-02-10 10:03:51.647831 | orchestrator | 2025-02-10 10:03:51.647949 | 
orchestrator | TASK [Add metadata to instances] *********************************************** 2025-02-10 10:03:51.647961 | orchestrator | Monday 10 February 2025 10:03:27 +0000 (0:02:57.663) 0:04:11.747 ******* 2025-02-10 10:03:51.647982 | orchestrator | changed: [localhost] => (item=test) 2025-02-10 10:03:51.648059 | orchestrator | changed: [localhost] => (item=test-1) 2025-02-10 10:03:51.648080 | orchestrator | changed: [localhost] => (item=test-2) 2025-02-10 10:03:51.648825 | orchestrator | changed: [localhost] => (item=test-3) 2025-02-10 10:03:51.649077 | orchestrator | changed: [localhost] => (item=test-4) 2025-02-10 10:03:51.649827 | orchestrator | 2025-02-10 10:03:51.650332 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-02-10 10:03:51.650968 | orchestrator | Monday 10 February 2025 10:03:51 +0000 (0:00:23.863) 0:04:35.611 ******* 2025-02-10 10:04:21.585979 | orchestrator | changed: [localhost] => (item=test) 2025-02-10 10:04:28.846188 | orchestrator | changed: [localhost] => (item=test-1) 2025-02-10 10:04:28.846352 | orchestrator | changed: [localhost] => (item=test-2) 2025-02-10 10:04:28.846391 | orchestrator | changed: [localhost] => (item=test-3) 2025-02-10 10:04:28.846420 | orchestrator | changed: [localhost] => (item=test-4) 2025-02-10 10:04:28.846446 | orchestrator | 2025-02-10 10:04:28.846475 | orchestrator | TASK [Create test volume] ****************************************************** 2025-02-10 10:04:28.846580 | orchestrator | Monday 10 February 2025 10:04:21 +0000 (0:00:29.937) 0:05:05.549 ******* 2025-02-10 10:04:28.846660 | orchestrator | changed: [localhost] 2025-02-10 10:04:28.846892 | orchestrator | 2025-02-10 10:04:28.846931 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-02-10 10:04:28.846965 | orchestrator | Monday 10 February 2025 10:04:28 +0000 (0:00:07.262) 0:05:12.811 ******* 2025-02-10 10:04:39.119241 | orchestrator | changed: [localhost] 2025-02-10 10:04:44.798079 | orchestrator | 2025-02-10 10:04:44.798354 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-02-10 10:04:44.798396 | orchestrator | Monday 10 February 2025 10:04:39 +0000 (0:00:10.271) 0:05:23.083 ******* 2025-02-10 10:04:44.798443 | orchestrator | ok: [localhost] 2025-02-10 10:04:44.799333 | orchestrator | 2025-02-10 10:04:44.799376 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-02-10 10:04:44.842297 | orchestrator | Monday 10 February 2025 10:04:44 +0000 (0:00:05.681) 0:05:28.764 ******* 2025-02-10 10:04:44.842401 | orchestrator | ok: [localhost] => { 2025-02-10 10:04:44.843009 | orchestrator |  "msg": "192.168.112.152" 2025-02-10 10:04:44.843119 | orchestrator | } 2025-02-10 10:04:44.843686 | orchestrator | 2025-02-10 10:04:44.844043 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-10 10:04:44.844248 | orchestrator | 2025-02-10 10:04:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-10 10:04:44.845158 | orchestrator | 2025-02-10 10:04:44 | INFO  | Please wait and do not abort execution. 
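As a possible follow-up, not part of the job: the floating IP printed above (192.168.112.152) is attached to the test instance, so reachability can be spot-checked from the manager. The key path below is a placeholder for wherever the test play stored the generated keypair; cirros is the default login user of the Cirros image used here.

# Hedged connectivity check against the floating IP created by the test play.
ping -c 3 192.168.112.152
ssh -i <path-to-test-keypair> cirros@192.168.112.152 uptime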
2025-02-10 10:04:44.845760 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-10 10:04:44.846801 | orchestrator | 2025-02-10 10:04:44.847054 | orchestrator | 2025-02-10 10:04:44.847985 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-10 10:04:44.848537 | orchestrator | Monday 10 February 2025 10:04:44 +0000 (0:00:00.045) 0:05:28.809 ******* 2025-02-10 10:04:44.849792 | orchestrator | =============================================================================== 2025-02-10 10:04:44.850246 | orchestrator | Create test instances ------------------------------------------------- 177.66s 2025-02-10 10:04:44.851460 | orchestrator | Add tag to instances --------------------------------------------------- 29.94s 2025-02-10 10:04:44.852006 | orchestrator | Add metadata to instances ---------------------------------------------- 23.86s 2025-02-10 10:04:44.853454 | orchestrator | Create test network topology ------------------------------------------- 18.12s 2025-02-10 10:04:44.854587 | orchestrator | Add member roles to user test ------------------------------------------ 11.45s 2025-02-10 10:04:44.855232 | orchestrator | Attach test volume ----------------------------------------------------- 10.27s 2025-02-10 10:04:44.856763 | orchestrator | Create test volume ------------------------------------------------------ 7.26s 2025-02-10 10:04:44.857306 | orchestrator | Create floating ip address ---------------------------------------------- 5.68s 2025-02-10 10:04:44.857338 | orchestrator | Add manager role to user test-admin ------------------------------------- 5.40s 2025-02-10 10:04:44.858008 | orchestrator | Create ssh security group ----------------------------------------------- 4.54s 2025-02-10 10:04:44.858468 | orchestrator | Create test server group ------------------------------------------------ 4.05s 2025-02-10 10:04:44.858810 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.02s 2025-02-10 10:04:44.859282 | orchestrator | Create test user -------------------------------------------------------- 3.97s 2025-02-10 10:04:44.859656 | orchestrator | Create test-admin user -------------------------------------------------- 3.93s 2025-02-10 10:04:44.859929 | orchestrator | Create test keypair ----------------------------------------------------- 3.87s 2025-02-10 10:04:44.860393 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.86s 2025-02-10 10:04:44.860611 | orchestrator | Create test project ----------------------------------------------------- 3.81s 2025-02-10 10:04:44.860992 | orchestrator | Create icmp security group ---------------------------------------------- 3.59s 2025-02-10 10:04:44.862305 | orchestrator | Create test domain ------------------------------------------------------ 3.40s 2025-02-10 10:04:44.862572 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s 2025-02-10 10:04:45.504475 | orchestrator | + server_list 2025-02-10 10:04:49.776687 | orchestrator | + openstack --os-cloud test server list 2025-02-10 10:04:49.776873 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-02-10 10:04:50.117866 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-02-10 10:04:50.117986 | orchestrator | 
+--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-02-10 10:04:50.118003 | orchestrator | | f9123568-3d4b-48bb-892b-4d87ced981cf | test-4 | ACTIVE | auto_allocated_network=10.42.0.28, 192.168.112.197 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-02-10 10:04:50.118012 | orchestrator | | 509b6a7c-bff3-45d4-91ae-e53de879bf43 | test-3 | ACTIVE | auto_allocated_network=10.42.0.43, 192.168.112.196 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-02-10 10:04:50.118065 | orchestrator | | 4c75af69-a3c1-48dc-87a2-dfaf02253236 | test-2 | ACTIVE | auto_allocated_network=10.42.0.57, 192.168.112.106 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-02-10 10:04:50.118076 | orchestrator | | 8fdbf131-9c67-4f96-9b77-6f066baf1527 | test-1 | ACTIVE | auto_allocated_network=10.42.0.7, 192.168.112.170 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-02-10 10:04:50.118085 | orchestrator | | 24604872-482f-4827-820f-f0851c5411da | test | ACTIVE | auto_allocated_network=10.42.0.8, 192.168.112.152 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-02-10 10:04:50.118094 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-02-10 10:04:50.118120 | orchestrator | + openstack --os-cloud test server show test 2025-02-10 10:04:54.450449 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:04:54.450649 | orchestrator | | Field | Value | 2025-02-10 10:04:54.450700 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:04:54.450718 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-02-10 10:04:54.450735 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-02-10 10:04:54.450752 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-02-10 10:04:54.450768 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-02-10 10:04:54.450785 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-02-10 10:04:54.450802 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-02-10 10:04:54.450820 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-02-10 10:04:54.450836 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-02-10 10:04:54.450873 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-02-10 10:04:54.450892 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-02-10 10:04:54.450920 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-02-10 10:04:54.450938 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-02-10 10:04:54.450956 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-02-10 10:04:54.450974 | orchestrator | | OS-EXT-STS:task_state | None | 2025-02-10 10:04:54.450991 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-02-10 10:04:54.451008 | orchestrator | | OS-SRV-USG:launched_at | 2025-02-10T10:00:53.000000 | 2025-02-10 10:04:54.451026 | orchestrator | | 
OS-SRV-USG:terminated_at | None | 2025-02-10 10:04:54.451046 | orchestrator | | accessIPv4 | | 2025-02-10 10:04:54.451066 | orchestrator | | accessIPv6 | | 2025-02-10 10:04:54.451085 | orchestrator | | addresses | auto_allocated_network=10.42.0.8, 192.168.112.152 | 2025-02-10 10:04:54.451114 | orchestrator | | config_drive | | 2025-02-10 10:04:54.451143 | orchestrator | | created | 2025-02-10T10:00:38Z | 2025-02-10 10:04:54.451161 | orchestrator | | description | None | 2025-02-10 10:04:54.451177 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-02-10 10:04:54.451194 | orchestrator | | hostId | e62075c6fc198b6907414eb448f90a80201fb9decac992ba5dedc726 | 2025-02-10 10:04:54.451212 | orchestrator | | host_status | None | 2025-02-10 10:04:54.451229 | orchestrator | | id | 24604872-482f-4827-820f-f0851c5411da | 2025-02-10 10:04:54.451247 | orchestrator | | image | Cirros 0.6.2 (06b1191c-78fd-4344-82e0-e4f95738c41f) | 2025-02-10 10:04:54.451264 | orchestrator | | key_name | test | 2025-02-10 10:04:54.451290 | orchestrator | | locked | False | 2025-02-10 10:04:54.451308 | orchestrator | | locked_reason | None | 2025-02-10 10:04:54.451324 | orchestrator | | name | test | 2025-02-10 10:04:54.451359 | orchestrator | | progress | 0 | 2025-02-10 10:04:54.451378 | orchestrator | | project_id | 42931ebed40647ba852abf35afed76ad | 2025-02-10 10:04:54.451395 | orchestrator | | properties | hostname='test' | 2025-02-10 10:04:54.451412 | orchestrator | | security_groups | name='icmp' | 2025-02-10 10:04:54.451429 | orchestrator | | | name='ssh' | 2025-02-10 10:04:54.451447 | orchestrator | | server_groups | ['033b802c-83ea-42f2-afa7-cb6842ecb76e'] | 2025-02-10 10:04:54.451464 | orchestrator | | status | ACTIVE | 2025-02-10 10:04:54.451512 | orchestrator | | tags | test | 2025-02-10 10:04:54.451531 | orchestrator | | trusted_image_certificates | None | 2025-02-10 10:04:54.451549 | orchestrator | | updated | 2025-02-10T10:03:32Z | 2025-02-10 10:04:54.451580 | orchestrator | | user_id | 6e2da354bd3a483d81737a9269b592f2 | 2025-02-10 10:04:54.451606 | orchestrator | | volumes_attached | delete_on_termination='False', id='cf711557-4bbf-479f-a988-110b44933a0b' | 2025-02-10 10:04:54.451833 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:04:54.788865 | orchestrator | + openstack --os-cloud test server show test-1 2025-02-10 10:04:59.122192 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:04:59.122323 | orchestrator | | Field | Value | 2025-02-10 10:04:59.122344 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:04:59.122359 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-02-10 10:04:59.122374 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-02-10 10:04:59.122434 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-02-10 10:04:59.122452 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-02-10 10:04:59.122466 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-02-10 10:04:59.122546 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-02-10 10:04:59.122562 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-02-10 10:04:59.122577 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-02-10 10:04:59.122606 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-02-10 10:04:59.122624 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-02-10 10:04:59.122640 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-02-10 10:04:59.122656 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-02-10 10:04:59.122680 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-02-10 10:04:59.122696 | orchestrator | | OS-EXT-STS:task_state | None | 2025-02-10 10:04:59.122712 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-02-10 10:04:59.122728 | orchestrator | | OS-SRV-USG:launched_at | 2025-02-10T10:01:32.000000 | 2025-02-10 10:04:59.122751 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-02-10 10:04:59.122767 | orchestrator | | accessIPv4 | | 2025-02-10 10:04:59.122783 | orchestrator | | accessIPv6 | | 2025-02-10 10:04:59.122799 | orchestrator | | addresses | auto_allocated_network=10.42.0.7, 192.168.112.170 | 2025-02-10 10:04:59.122823 | orchestrator | | config_drive | | 2025-02-10 10:04:59.122839 | orchestrator | | created | 2025-02-10T10:01:18Z | 2025-02-10 10:04:59.122855 | orchestrator | | description | None | 2025-02-10 10:04:59.122877 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-02-10 10:04:59.122894 | orchestrator | | hostId | f2004b91cfe5671d430bfb19e57253d6aa288525f4fd4bc7149e255d | 2025-02-10 10:04:59.122910 | orchestrator | | host_status | None | 2025-02-10 10:04:59.122934 | orchestrator | | id | 8fdbf131-9c67-4f96-9b77-6f066baf1527 | 2025-02-10 10:04:59.122950 | orchestrator | | image | Cirros 0.6.2 (06b1191c-78fd-4344-82e0-e4f95738c41f) | 2025-02-10 10:04:59.122967 | orchestrator | | key_name | test | 2025-02-10 10:04:59.122982 | orchestrator | | locked | False | 2025-02-10 10:04:59.122996 | orchestrator | | locked_reason | None | 2025-02-10 10:04:59.123011 | orchestrator | | name | test-1 | 2025-02-10 10:04:59.123036 | orchestrator | | progress | 0 | 2025-02-10 10:04:59.123052 | orchestrator | | project_id | 42931ebed40647ba852abf35afed76ad | 2025-02-10 10:04:59.123066 | orchestrator | | properties | hostname='test-1' | 2025-02-10 10:04:59.123080 | orchestrator | | security_groups | name='icmp' | 2025-02-10 10:04:59.123094 | 
orchestrator | | | name='ssh' | 2025-02-10 10:04:59.123116 | orchestrator | | server_groups | ['033b802c-83ea-42f2-afa7-cb6842ecb76e'] | 2025-02-10 10:04:59.123130 | orchestrator | | status | ACTIVE | 2025-02-10 10:04:59.123145 | orchestrator | | tags | test | 2025-02-10 10:04:59.123159 | orchestrator | | trusted_image_certificates | None | 2025-02-10 10:04:59.123173 | orchestrator | | updated | 2025-02-10T10:03:37Z | 2025-02-10 10:04:59.123187 | orchestrator | | user_id | 6e2da354bd3a483d81737a9269b592f2 | 2025-02-10 10:04:59.123206 | orchestrator | | volumes_attached | | 2025-02-10 10:04:59.124995 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:04:59.485819 | orchestrator | + openstack --os-cloud test server show test-2 2025-02-10 10:05:03.762305 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:05:03.762401 | orchestrator | | Field | Value | 2025-02-10 10:05:03.762413 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:05:03.762441 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-02-10 10:05:03.762450 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-02-10 10:05:03.762458 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-02-10 10:05:03.762466 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-02-10 10:05:03.762474 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-02-10 10:05:03.762524 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-02-10 10:05:03.762547 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-02-10 10:05:03.762556 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-02-10 10:05:03.762573 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-02-10 10:05:03.762581 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-02-10 10:05:03.762589 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-02-10 10:05:03.762605 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-02-10 10:05:03.762613 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-02-10 10:05:03.762621 | orchestrator | | OS-EXT-STS:task_state | None | 2025-02-10 10:05:03.762629 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-02-10 10:05:03.762637 | orchestrator | | OS-SRV-USG:launched_at | 2025-02-10T10:02:10.000000 | 2025-02-10 10:05:03.762645 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-02-10 10:05:03.762657 | orchestrator | | accessIPv4 | | 2025-02-10 10:05:03.762665 | orchestrator | | accessIPv6 | | 2025-02-10 10:05:03.762673 | orchestrator | | addresses | auto_allocated_network=10.42.0.57, 
192.168.112.106 | 2025-02-10 10:05:03.762685 | orchestrator | | config_drive | | 2025-02-10 10:05:03.762698 | orchestrator | | created | 2025-02-10T10:01:56Z | 2025-02-10 10:05:03.762706 | orchestrator | | description | None | 2025-02-10 10:05:03.762715 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-02-10 10:05:03.762722 | orchestrator | | hostId | f62eaff7ec5337f85b71ba30d1bc61bb9f73154b6abe54c52002d971 | 2025-02-10 10:05:03.762730 | orchestrator | | host_status | None | 2025-02-10 10:05:03.762738 | orchestrator | | id | 4c75af69-a3c1-48dc-87a2-dfaf02253236 | 2025-02-10 10:05:03.762750 | orchestrator | | image | Cirros 0.6.2 (06b1191c-78fd-4344-82e0-e4f95738c41f) | 2025-02-10 10:05:03.762758 | orchestrator | | key_name | test | 2025-02-10 10:05:03.762766 | orchestrator | | locked | False | 2025-02-10 10:05:03.762774 | orchestrator | | locked_reason | None | 2025-02-10 10:05:03.762782 | orchestrator | | name | test-2 | 2025-02-10 10:05:03.762798 | orchestrator | | progress | 0 | 2025-02-10 10:05:03.762807 | orchestrator | | project_id | 42931ebed40647ba852abf35afed76ad | 2025-02-10 10:05:03.762815 | orchestrator | | properties | hostname='test-2' | 2025-02-10 10:05:03.762822 | orchestrator | | security_groups | name='icmp' | 2025-02-10 10:05:03.762830 | orchestrator | | | name='ssh' | 2025-02-10 10:05:03.762839 | orchestrator | | server_groups | ['033b802c-83ea-42f2-afa7-cb6842ecb76e'] | 2025-02-10 10:05:03.762851 | orchestrator | | status | ACTIVE | 2025-02-10 10:05:03.762861 | orchestrator | | tags | test | 2025-02-10 10:05:03.762870 | orchestrator | | trusted_image_certificates | None | 2025-02-10 10:05:03.762879 | orchestrator | | updated | 2025-02-10T10:03:42Z | 2025-02-10 10:05:03.762888 | orchestrator | | user_id | 6e2da354bd3a483d81737a9269b592f2 | 2025-02-10 10:05:03.762904 | orchestrator | | volumes_attached | | 2025-02-10 10:05:03.764983 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:05:04.149170 | orchestrator | + openstack --os-cloud test server show test-3 2025-02-10 10:05:08.503092 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:05:08.503203 | orchestrator | | Field | Value | 2025-02-10 10:05:08.503217 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:05:08.503244 | orchestrator | | OS-DCF:diskConfig | 
MANUAL | 2025-02-10 10:05:08.503255 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-02-10 10:05:08.503265 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-02-10 10:05:08.503274 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-02-10 10:05:08.503284 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-02-10 10:05:08.503294 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-02-10 10:05:08.503325 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-02-10 10:05:08.503337 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-02-10 10:05:08.503355 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-02-10 10:05:08.503366 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-02-10 10:05:08.503376 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-02-10 10:05:08.503391 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-02-10 10:05:08.503402 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-02-10 10:05:08.503412 | orchestrator | | OS-EXT-STS:task_state | None | 2025-02-10 10:05:08.503421 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-02-10 10:05:08.503431 | orchestrator | | OS-SRV-USG:launched_at | 2025-02-10T10:02:40.000000 | 2025-02-10 10:05:08.503454 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-02-10 10:05:08.503464 | orchestrator | | accessIPv4 | | 2025-02-10 10:05:08.503475 | orchestrator | | accessIPv6 | | 2025-02-10 10:05:08.503505 | orchestrator | | addresses | auto_allocated_network=10.42.0.43, 192.168.112.196 | 2025-02-10 10:05:08.503521 | orchestrator | | config_drive | | 2025-02-10 10:05:08.503536 | orchestrator | | created | 2025-02-10T10:02:32Z | 2025-02-10 10:05:08.503547 | orchestrator | | description | None | 2025-02-10 10:05:08.503557 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-02-10 10:05:08.503568 | orchestrator | | hostId | f2004b91cfe5671d430bfb19e57253d6aa288525f4fd4bc7149e255d | 2025-02-10 10:05:08.503578 | orchestrator | | host_status | None | 2025-02-10 10:05:08.503589 | orchestrator | | id | 509b6a7c-bff3-45d4-91ae-e53de879bf43 | 2025-02-10 10:05:08.503606 | orchestrator | | image | Cirros 0.6.2 (06b1191c-78fd-4344-82e0-e4f95738c41f) | 2025-02-10 10:05:08.503617 | orchestrator | | key_name | test | 2025-02-10 10:05:08.503629 | orchestrator | | locked | False | 2025-02-10 10:05:08.503640 | orchestrator | | locked_reason | None | 2025-02-10 10:05:08.503651 | orchestrator | | name | test-3 | 2025-02-10 10:05:08.503670 | orchestrator | | progress | 0 | 2025-02-10 10:05:08.503681 | orchestrator | | project_id | 42931ebed40647ba852abf35afed76ad | 2025-02-10 10:05:08.503693 | orchestrator | | properties | hostname='test-3' | 2025-02-10 10:05:08.503704 | orchestrator | | security_groups | name='icmp' | 2025-02-10 10:05:08.503715 | orchestrator | | | name='ssh' | 2025-02-10 10:05:08.503726 | orchestrator | | server_groups | ['033b802c-83ea-42f2-afa7-cb6842ecb76e'] | 2025-02-10 10:05:08.503741 | orchestrator | | status | ACTIVE | 2025-02-10 10:05:08.503754 | orchestrator | | tags | test | 2025-02-10 10:05:08.503765 | orchestrator | | trusted_image_certificates | None | 2025-02-10 10:05:08.503777 | orchestrator | | updated | 2025-02-10T10:03:46Z | 
2025-02-10 10:05:08.503792 | orchestrator | | user_id | 6e2da354bd3a483d81737a9269b592f2 | 2025-02-10 10:05:08.503809 | orchestrator | | volumes_attached | | 2025-02-10 10:05:08.507732 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:05:09.063613 | orchestrator | + openstack --os-cloud test server show test-4 2025-02-10 10:05:13.125758 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:05:13.125886 | orchestrator | | Field | Value | 2025-02-10 10:05:13.125906 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:05:13.125948 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-02-10 10:05:13.125965 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-02-10 10:05:13.125981 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-02-10 10:05:13.125996 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-02-10 10:05:13.126012 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-02-10 10:05:13.126102 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-02-10 10:05:13.126117 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-02-10 10:05:13.126132 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-02-10 10:05:13.126159 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-02-10 10:05:13.126174 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-02-10 10:05:13.126188 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-02-10 10:05:13.126210 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-02-10 10:05:13.126224 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-02-10 10:05:13.126238 | orchestrator | | OS-EXT-STS:task_state | None | 2025-02-10 10:05:13.126252 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-02-10 10:05:13.126274 | orchestrator | | OS-SRV-USG:launched_at | 2025-02-10T10:03:12.000000 | 2025-02-10 10:05:13.126291 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-02-10 10:05:13.126308 | orchestrator | | accessIPv4 | | 2025-02-10 10:05:13.126324 | orchestrator | | accessIPv6 | | 2025-02-10 10:05:13.126339 | orchestrator | | addresses | auto_allocated_network=10.42.0.28, 192.168.112.197 | 2025-02-10 10:05:13.126362 | orchestrator | | config_drive | | 2025-02-10 10:05:13.126378 | orchestrator | | created | 2025-02-10T10:03:03Z | 2025-02-10 10:05:13.126400 | orchestrator | | description | None | 2025-02-10 10:05:13.126417 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', 
extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-02-10 10:05:13.126432 | orchestrator | | hostId | f62eaff7ec5337f85b71ba30d1bc61bb9f73154b6abe54c52002d971 | 2025-02-10 10:05:13.126448 | orchestrator | | host_status | None | 2025-02-10 10:05:13.126468 | orchestrator | | id | f9123568-3d4b-48bb-892b-4d87ced981cf | 2025-02-10 10:05:13.126546 | orchestrator | | image | Cirros 0.6.2 (06b1191c-78fd-4344-82e0-e4f95738c41f) | 2025-02-10 10:05:13.126562 | orchestrator | | key_name | test | 2025-02-10 10:05:13.126576 | orchestrator | | locked | False | 2025-02-10 10:05:13.126590 | orchestrator | | locked_reason | None | 2025-02-10 10:05:13.126604 | orchestrator | | name | test-4 | 2025-02-10 10:05:13.126625 | orchestrator | | progress | 0 | 2025-02-10 10:05:13.126647 | orchestrator | | project_id | 42931ebed40647ba852abf35afed76ad | 2025-02-10 10:05:13.126662 | orchestrator | | properties | hostname='test-4' | 2025-02-10 10:05:13.126676 | orchestrator | | security_groups | name='icmp' | 2025-02-10 10:05:13.126695 | orchestrator | | | name='ssh' | 2025-02-10 10:05:13.126710 | orchestrator | | server_groups | ['033b802c-83ea-42f2-afa7-cb6842ecb76e'] | 2025-02-10 10:05:13.126724 | orchestrator | | status | ACTIVE | 2025-02-10 10:05:13.126738 | orchestrator | | tags | test | 2025-02-10 10:05:13.126752 | orchestrator | | trusted_image_certificates | None | 2025-02-10 10:05:13.126766 | orchestrator | | updated | 2025-02-10T10:03:51Z | 2025-02-10 10:05:13.126780 | orchestrator | | user_id | 6e2da354bd3a483d81737a9269b592f2 | 2025-02-10 10:05:13.126799 | orchestrator | | volumes_attached | | 2025-02-10 10:05:13.128996 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-02-10 10:05:13.455149 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-10 10:05:16.703000 | orchestrator | + compute_list 2025-02-10 10:05:16.703129 | orchestrator | + osism manage compute list testbed-node-3 2025-02-10 10:05:16.703178 | orchestrator | +--------------------------------------+--------+----------+ 2025-02-10 10:05:16.995106 | orchestrator | | ID | Name | Status | 2025-02-10 10:05:16.995211 | orchestrator | |--------------------------------------+--------+----------| 2025-02-10 10:05:16.995223 | orchestrator | | 509b6a7c-bff3-45d4-91ae-e53de879bf43 | test-3 | ACTIVE | 2025-02-10 10:05:16.995231 | orchestrator | | 8fdbf131-9c67-4f96-9b77-6f066baf1527 | test-1 | ACTIVE | 2025-02-10 10:05:16.995238 | orchestrator | +--------------------------------------+--------+----------+ 2025-02-10 10:05:16.995258 | orchestrator | + osism manage compute list testbed-node-4 2025-02-10 10:05:20.412138 | orchestrator | +--------------------------------------+--------+----------+ 2025-02-10 10:05:20.751792 | orchestrator | | ID | Name | Status | 2025-02-10 10:05:20.751916 | orchestrator | |--------------------------------------+--------+----------| 2025-02-10 10:05:20.751934 | orchestrator | | f9123568-3d4b-48bb-892b-4d87ced981cf | test-4 | ACTIVE | 2025-02-10 10:05:20.751950 | orchestrator | | 4c75af69-a3c1-48dc-87a2-dfaf02253236 | test-2 | ACTIVE | 2025-02-10 10:05:20.751980 | 
orchestrator | +--------------------------------------+--------+----------+ 2025-02-10 10:05:20.752023 | orchestrator | + osism manage compute list testbed-node-5 2025-02-10 10:05:24.158354 | orchestrator | +--------------------------------------+--------+----------+ 2025-02-10 10:05:24.587969 | orchestrator | | ID | Name | Status | 2025-02-10 10:05:24.588100 | orchestrator | |--------------------------------------+--------+----------| 2025-02-10 10:05:24.588119 | orchestrator | | 24604872-482f-4827-820f-f0851c5411da | test | ACTIVE | 2025-02-10 10:05:24.588134 | orchestrator | +--------------------------------------+--------+----------+ 2025-02-10 10:05:24.588167 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2025-02-10 10:05:28.109803 | orchestrator | 2025-02-10 10:05:28 | INFO  | Live migrating server f9123568-3d4b-48bb-892b-4d87ced981cf 2025-02-10 10:05:37.319466 | orchestrator | 2025-02-10 10:05:37 | INFO  | Live migration of f9123568-3d4b-48bb-892b-4d87ced981cf (test-4) is still in progress 2025-02-10 10:05:39.898438 | orchestrator | 2025-02-10 10:05:39 | INFO  | Live migration of f9123568-3d4b-48bb-892b-4d87ced981cf (test-4) is still in progress 2025-02-10 10:05:42.346369 | orchestrator | 2025-02-10 10:05:42 | INFO  | Live migration of f9123568-3d4b-48bb-892b-4d87ced981cf (test-4) is still in progress 2025-02-10 10:05:45.119857 | orchestrator | 2025-02-10 10:05:45 | INFO  | Live migrating server 4c75af69-a3c1-48dc-87a2-dfaf02253236 2025-02-10 10:05:53.471372 | orchestrator | 2025-02-10 10:05:53 | INFO  | Live migration of 4c75af69-a3c1-48dc-87a2-dfaf02253236 (test-2) is still in progress 2025-02-10 10:05:55.983710 | orchestrator | 2025-02-10 10:05:55 | INFO  | Live migration of 4c75af69-a3c1-48dc-87a2-dfaf02253236 (test-2) is still in progress 2025-02-10 10:05:58.403285 | orchestrator | 2025-02-10 10:05:58 | INFO  | Live migration of 4c75af69-a3c1-48dc-87a2-dfaf02253236 (test-2) is still in progress 2025-02-10 10:06:01.017434 | orchestrator | + compute_list 2025-02-10 10:06:04.468029 | orchestrator | + osism manage compute list testbed-node-3 2025-02-10 10:06:04.468304 | orchestrator | +--------------------------------------+--------+----------+ 2025-02-10 10:06:04.758255 | orchestrator | | ID | Name | Status | 2025-02-10 10:06:04.758384 | orchestrator | |--------------------------------------+--------+----------| 2025-02-10 10:06:04.758403 | orchestrator | | f9123568-3d4b-48bb-892b-4d87ced981cf | test-4 | ACTIVE | 2025-02-10 10:06:04.758418 | orchestrator | | 509b6a7c-bff3-45d4-91ae-e53de879bf43 | test-3 | ACTIVE | 2025-02-10 10:06:04.758432 | orchestrator | | 4c75af69-a3c1-48dc-87a2-dfaf02253236 | test-2 | ACTIVE | 2025-02-10 10:06:04.758446 | orchestrator | | 8fdbf131-9c67-4f96-9b77-6f066baf1527 | test-1 | ACTIVE | 2025-02-10 10:06:04.758460 | orchestrator | +--------------------------------------+--------+----------+ 2025-02-10 10:06:04.758559 | orchestrator | + osism manage compute list testbed-node-4 2025-02-10 10:06:07.342850 | orchestrator | +------+--------+----------+ 2025-02-10 10:06:07.635429 | orchestrator | | ID | Name | Status | 2025-02-10 10:06:07.635611 | orchestrator | |------+--------+----------| 2025-02-10 10:06:07.635632 | orchestrator | +------+--------+----------+ 2025-02-10 10:06:07.635667 | orchestrator | + osism manage compute list testbed-node-5 2025-02-10 10:06:10.630606 | orchestrator | +--------------------------------------+--------+----------+ 2025-02-10 10:06:10.910841 | orchestrator | | ID | Name | 
Status | 2025-02-10 10:06:10.910967 | orchestrator | |--------------------------------------+--------+----------| 2025-02-10 10:06:10.910985 | orchestrator | | 24604872-482f-4827-820f-f0851c5411da | test | ACTIVE | 2025-02-10 10:06:10.911000 | orchestrator | +--------------------------------------+--------+----------+ 2025-02-10 10:06:10.911032 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2025-02-10 10:06:14.026854 | orchestrator | 2025-02-10 10:06:14 | INFO  | Live migrating server 24604872-482f-4827-820f-f0851c5411da 2025-02-10 10:06:23.000031 | orchestrator | 2025-02-10 10:06:22 | INFO  | Live migration of 24604872-482f-4827-820f-f0851c5411da (test) is still in progress 2025-02-10 10:06:25.484192 | orchestrator | 2025-02-10 10:06:25 | INFO  | Live migration of 24604872-482f-4827-820f-f0851c5411da (test) is still in progress 2025-02-10 10:06:27.940050 | orchestrator | 2025-02-10 10:06:27 | INFO  | Live migration of 24604872-482f-4827-820f-f0851c5411da (test) is still in progress 2025-02-10 10:06:30.280968 | orchestrator | 2025-02-10 10:06:30 | INFO  | Live migration of 24604872-482f-4827-820f-f0851c5411da (test) is still in progress 2025-02-10 10:06:32.950929 | orchestrator | + compute_list 2025-02-10 10:06:35.957089 | orchestrator | + osism manage compute list testbed-node-3 2025-02-10 10:06:35.957250 | orchestrator | +--------------------------------------+--------+----------+ 2025-02-10 10:06:36.286633 | orchestrator | | ID | Name | Status | 2025-02-10 10:06:36.286756 | orchestrator | |--------------------------------------+--------+----------| 2025-02-10 10:06:36.286775 | orchestrator | | f9123568-3d4b-48bb-892b-4d87ced981cf | test-4 | ACTIVE | 2025-02-10 10:06:36.286814 | orchestrator | | 509b6a7c-bff3-45d4-91ae-e53de879bf43 | test-3 | ACTIVE | 2025-02-10 10:06:36.286829 | orchestrator | | 4c75af69-a3c1-48dc-87a2-dfaf02253236 | test-2 | ACTIVE | 2025-02-10 10:06:36.286843 | orchestrator | | 8fdbf131-9c67-4f96-9b77-6f066baf1527 | test-1 | ACTIVE | 2025-02-10 10:06:36.286857 | orchestrator | | 24604872-482f-4827-820f-f0851c5411da | test | ACTIVE | 2025-02-10 10:06:36.286877 | orchestrator | +--------------------------------------+--------+----------+ 2025-02-10 10:06:36.286910 | orchestrator | + osism manage compute list testbed-node-4 2025-02-10 10:06:38.849657 | orchestrator | +------+--------+----------+ 2025-02-10 10:06:39.151214 | orchestrator | | ID | Name | Status | 2025-02-10 10:06:39.151366 | orchestrator | |------+--------+----------| 2025-02-10 10:06:39.151388 | orchestrator | +------+--------+----------+ 2025-02-10 10:06:39.151425 | orchestrator | + osism manage compute list testbed-node-5 2025-02-10 10:06:41.804301 | orchestrator | +------+--------+----------+ 2025-02-10 10:06:42.106834 | orchestrator | | ID | Name | Status | 2025-02-10 10:06:42.106957 | orchestrator | |------+--------+----------| 2025-02-10 10:06:42.106986 | orchestrator | +------+--------+----------+ 2025-02-10 10:06:42.107019 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2025-02-10 10:06:44.935020 | orchestrator | 2025-02-10 10:06:44 | INFO  | Live migrating server f9123568-3d4b-48bb-892b-4d87ced981cf 2025-02-10 10:06:52.586075 | orchestrator | 2025-02-10 10:06:52 | INFO  | Live migration of f9123568-3d4b-48bb-892b-4d87ced981cf (test-4) is still in progress 2025-02-10 10:06:55.159813 | orchestrator | 2025-02-10 10:06:55 | INFO  | Live migration of f9123568-3d4b-48bb-892b-4d87ced981cf (test-4) is still in 
progress 2025-02-10 10:06:57.564628 | orchestrator | 2025-02-10 10:06:57 | INFO  | Live migration of f9123568-3d4b-48bb-892b-4d87ced981cf (test-4) is still in progress 2025-02-10 10:07:00.140221 | orchestrator | 2025-02-10 10:07:00 | INFO  | Live migrating server 509b6a7c-bff3-45d4-91ae-e53de879bf43 2025-02-10 10:07:07.029118 | orchestrator | 2025-02-10 10:07:07 | INFO  | Live migration of 509b6a7c-bff3-45d4-91ae-e53de879bf43 (test-3) is still in progress 2025-02-10 10:07:09.374561 | orchestrator | 2025-02-10 10:07:09 | INFO  | Live migration of 509b6a7c-bff3-45d4-91ae-e53de879bf43 (test-3) is still in progress 2025-02-10 10:07:11.815405 | orchestrator | 2025-02-10 10:07:11 | INFO  | Live migration of 509b6a7c-bff3-45d4-91ae-e53de879bf43 (test-3) is still in progress 2025-02-10 10:07:14.138251 | orchestrator | 2025-02-10 10:07:14 | INFO  | Live migrating server 4c75af69-a3c1-48dc-87a2-dfaf02253236 2025-02-10 10:07:19.981500 | orchestrator | 2025-02-10 10:07:19 | INFO  | Live migration of 4c75af69-a3c1-48dc-87a2-dfaf02253236 (test-2) is still in progress 2025-02-10 10:07:22.310807 | orchestrator | 2025-02-10 10:07:22 | INFO  | Live migration of 4c75af69-a3c1-48dc-87a2-dfaf02253236 (test-2) is still in progress 2025-02-10 10:07:24.807545 | orchestrator | 2025-02-10 10:07:24 | INFO  | Live migration of 4c75af69-a3c1-48dc-87a2-dfaf02253236 (test-2) is still in progress 2025-02-10 10:07:27.099600 | orchestrator | 2025-02-10 10:07:27 | INFO  | Live migrating server 8fdbf131-9c67-4f96-9b77-6f066baf1527 2025-02-10 10:07:32.354421 | orchestrator | 2025-02-10 10:07:32 | INFO  | Live migration of 8fdbf131-9c67-4f96-9b77-6f066baf1527 (test-1) is still in progress 2025-02-10 10:07:34.704350 | orchestrator | 2025-02-10 10:07:34 | INFO  | Live migration of 8fdbf131-9c67-4f96-9b77-6f066baf1527 (test-1) is still in progress 2025-02-10 10:07:37.039206 | orchestrator | 2025-02-10 10:07:37 | INFO  | Live migrating server 24604872-482f-4827-820f-f0851c5411da 2025-02-10 10:07:42.004787 | orchestrator | 2025-02-10 10:07:42 | INFO  | Live migration of 24604872-482f-4827-820f-f0851c5411da (test) is still in progress 2025-02-10 10:07:44.338342 | orchestrator | 2025-02-10 10:07:44 | INFO  | Live migration of 24604872-482f-4827-820f-f0851c5411da (test) is still in progress 2025-02-10 10:07:46.819607 | orchestrator | 2025-02-10 10:07:46 | INFO  | Live migration of 24604872-482f-4827-820f-f0851c5411da (test) is still in progress 2025-02-10 10:07:49.171177 | orchestrator | 2025-02-10 10:07:49 | INFO  | Live migration of 24604872-482f-4827-820f-f0851c5411da (test) is still in progress 2025-02-10 10:07:51.793625 | orchestrator | + compute_list 2025-02-10 10:07:54.233324 | orchestrator | + osism manage compute list testbed-node-3 2025-02-10 10:07:54.233512 | orchestrator | +------+--------+----------+ 2025-02-10 10:07:54.498105 | orchestrator | | ID | Name | Status | 2025-02-10 10:07:54.498232 | orchestrator | |------+--------+----------| 2025-02-10 10:07:54.498251 | orchestrator | +------+--------+----------+ 2025-02-10 10:07:54.498284 | orchestrator | + osism manage compute list testbed-node-4 2025-02-10 10:07:57.601887 | orchestrator | +--------------------------------------+--------+----------+ 2025-02-10 10:07:57.876044 | orchestrator | | ID | Name | Status | 2025-02-10 10:07:57.876136 | orchestrator | |--------------------------------------+--------+----------| 2025-02-10 10:07:57.876144 | orchestrator | | f9123568-3d4b-48bb-892b-4d87ced981cf | test-4 | ACTIVE | 2025-02-10 10:07:57.876175 | orchestrator | | 
509b6a7c-bff3-45d4-91ae-e53de879bf43 | test-3 | ACTIVE | 2025-02-10 10:07:57.876181 | orchestrator | | 4c75af69-a3c1-48dc-87a2-dfaf02253236 | test-2 | ACTIVE | 2025-02-10 10:07:57.876187 | orchestrator | | 8fdbf131-9c67-4f96-9b77-6f066baf1527 | test-1 | ACTIVE | 2025-02-10 10:07:57.876193 | orchestrator | | 24604872-482f-4827-820f-f0851c5411da | test | ACTIVE | 2025-02-10 10:07:57.876199 | orchestrator | +--------------------------------------+--------+----------+ 2025-02-10 10:07:57.876217 | orchestrator | + osism manage compute list testbed-node-5 2025-02-10 10:08:00.359204 | orchestrator | +------+--------+----------+ 2025-02-10 10:08:00.640120 | orchestrator | | ID | Name | Status | 2025-02-10 10:08:00.640250 | orchestrator | |------+--------+----------| 2025-02-10 10:08:00.640272 | orchestrator | +------+--------+----------+ 2025-02-10 10:08:00.640308 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2025-02-10 10:08:03.512284 | orchestrator | 2025-02-10 10:08:03 | INFO  | Live migrating server f9123568-3d4b-48bb-892b-4d87ced981cf 2025-02-10 10:08:10.683732 | orchestrator | 2025-02-10 10:08:10 | INFO  | Live migration of f9123568-3d4b-48bb-892b-4d87ced981cf (test-4) is still in progress 2025-02-10 10:08:13.025525 | orchestrator | 2025-02-10 10:08:13 | INFO  | Live migration of f9123568-3d4b-48bb-892b-4d87ced981cf (test-4) is still in progress 2025-02-10 10:08:15.459051 | orchestrator | 2025-02-10 10:08:15 | INFO  | Live migration of f9123568-3d4b-48bb-892b-4d87ced981cf (test-4) is still in progress 2025-02-10 10:08:17.749865 | orchestrator | 2025-02-10 10:08:17 | INFO  | Live migrating server 509b6a7c-bff3-45d4-91ae-e53de879bf43 2025-02-10 10:08:23.346979 | orchestrator | 2025-02-10 10:08:23 | INFO  | Live migration of 509b6a7c-bff3-45d4-91ae-e53de879bf43 (test-3) is still in progress 2025-02-10 10:08:25.720223 | orchestrator | 2025-02-10 10:08:25 | INFO  | Live migration of 509b6a7c-bff3-45d4-91ae-e53de879bf43 (test-3) is still in progress 2025-02-10 10:08:28.137210 | orchestrator | 2025-02-10 10:08:28 | INFO  | Live migration of 509b6a7c-bff3-45d4-91ae-e53de879bf43 (test-3) is still in progress 2025-02-10 10:08:30.450617 | orchestrator | 2025-02-10 10:08:30 | INFO  | Live migrating server 4c75af69-a3c1-48dc-87a2-dfaf02253236 2025-02-10 10:08:35.834730 | orchestrator | 2025-02-10 10:08:35 | INFO  | Live migration of 4c75af69-a3c1-48dc-87a2-dfaf02253236 (test-2) is still in progress 2025-02-10 10:08:38.272738 | orchestrator | 2025-02-10 10:08:38 | INFO  | Live migration of 4c75af69-a3c1-48dc-87a2-dfaf02253236 (test-2) is still in progress 2025-02-10 10:08:40.589235 | orchestrator | 2025-02-10 10:08:40 | INFO  | Live migration of 4c75af69-a3c1-48dc-87a2-dfaf02253236 (test-2) is still in progress 2025-02-10 10:08:42.883053 | orchestrator | 2025-02-10 10:08:42 | INFO  | Live migrating server 8fdbf131-9c67-4f96-9b77-6f066baf1527 2025-02-10 10:08:48.256712 | orchestrator | 2025-02-10 10:08:48 | INFO  | Live migration of 8fdbf131-9c67-4f96-9b77-6f066baf1527 (test-1) is still in progress 2025-02-10 10:08:50.544018 | orchestrator | 2025-02-10 10:08:50 | INFO  | Live migration of 8fdbf131-9c67-4f96-9b77-6f066baf1527 (test-1) is still in progress 2025-02-10 10:08:52.845069 | orchestrator | 2025-02-10 10:08:52 | INFO  | Live migration of 8fdbf131-9c67-4f96-9b77-6f066baf1527 (test-1) is still in progress 2025-02-10 10:08:55.226873 | orchestrator | 2025-02-10 10:08:55 | INFO  | Live migrating server 24604872-482f-4827-820f-f0851c5411da 2025-02-10 
10:09:00.427763 | orchestrator | 2025-02-10 10:09:00 | INFO  | Live migration of 24604872-482f-4827-820f-f0851c5411da (test) is still in progress 2025-02-10 10:09:02.817714 | orchestrator | 2025-02-10 10:09:02 | INFO  | Live migration of 24604872-482f-4827-820f-f0851c5411da (test) is still in progress 2025-02-10 10:09:05.143619 | orchestrator | 2025-02-10 10:09:05 | INFO  | Live migration of 24604872-482f-4827-820f-f0851c5411da (test) is still in progress 2025-02-10 10:09:07.507562 | orchestrator | 2025-02-10 10:09:07 | INFO  | Live migration of 24604872-482f-4827-820f-f0851c5411da (test) is still in progress 2025-02-10 10:09:10.051320 | orchestrator | + compute_list 2025-02-10 10:09:12.642396 | orchestrator | + osism manage compute list testbed-node-3 2025-02-10 10:09:12.643506 | orchestrator | +------+--------+----------+ 2025-02-10 10:09:12.911084 | orchestrator | | ID | Name | Status | 2025-02-10 10:09:12.911241 | orchestrator | |------+--------+----------| 2025-02-10 10:09:12.911276 | orchestrator | +------+--------+----------+ 2025-02-10 10:09:12.911325 | orchestrator | + osism manage compute list testbed-node-4 2025-02-10 10:09:15.346748 | orchestrator | +------+--------+----------+ 2025-02-10 10:09:15.613406 | orchestrator | | ID | Name | Status | 2025-02-10 10:09:15.613574 | orchestrator | |------+--------+----------| 2025-02-10 10:09:15.613595 | orchestrator | +------+--------+----------+ 2025-02-10 10:09:15.613628 | orchestrator | + osism manage compute list testbed-node-5 2025-02-10 10:09:18.711103 | orchestrator | +--------------------------------------+--------+----------+ 2025-02-10 10:09:18.946953 | orchestrator | | ID | Name | Status | 2025-02-10 10:09:18.947236 | orchestrator | |--------------------------------------+--------+----------| 2025-02-10 10:09:18.947268 | orchestrator | | f9123568-3d4b-48bb-892b-4d87ced981cf | test-4 | ACTIVE | 2025-02-10 10:09:18.947284 | orchestrator | | 509b6a7c-bff3-45d4-91ae-e53de879bf43 | test-3 | ACTIVE | 2025-02-10 10:09:18.947298 | orchestrator | | 4c75af69-a3c1-48dc-87a2-dfaf02253236 | test-2 | ACTIVE | 2025-02-10 10:09:18.947312 | orchestrator | | 8fdbf131-9c67-4f96-9b77-6f066baf1527 | test-1 | ACTIVE | 2025-02-10 10:09:18.947327 | orchestrator | | 24604872-482f-4827-820f-f0851c5411da | test | ACTIVE | 2025-02-10 10:09:18.947342 | orchestrator | +--------------------------------------+--------+----------+ 2025-02-10 10:09:19.064477 | orchestrator | changed 2025-02-10 10:09:19.110410 | 2025-02-10 10:09:19.110546 | TASK [Run tempest] 2025-02-10 10:09:19.219429 | orchestrator | skipping: Conditional result was False 2025-02-10 10:09:19.234436 | 2025-02-10 10:09:19.234581 | TASK [Check prometheus alert status] 2025-02-10 10:09:19.345875 | orchestrator | skipping: Conditional result was False 2025-02-10 10:09:19.394249 | 2025-02-10 10:09:19.394355 | PLAY RECAP 2025-02-10 10:09:19.394414 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0 2025-02-10 10:09:19.394442 | 2025-02-10 10:09:19.680615 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-02-10 10:09:19.688710 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-02-10 10:09:20.429194 | 2025-02-10 10:09:20.429443 | PLAY [Post output play] 2025-02-10 10:09:20.458687 | 2025-02-10 10:09:20.458827 | LOOP [stage-output : Register sources] 2025-02-10 10:09:20.544742 | 2025-02-10 10:09:20.545039 | TASK [stage-output : Check sudo] 2025-02-10 10:09:21.295296 | orchestrator | 
sudo: a password is required 2025-02-10 10:09:21.591068 | orchestrator | ok: Runtime: 0:00:00.015928 2025-02-10 10:09:21.607733 | 2025-02-10 10:09:21.607884 | LOOP [stage-output : Set source and destination for files and folders] 2025-02-10 10:09:21.651358 | 2025-02-10 10:09:21.651611 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-02-10 10:09:21.746081 | orchestrator | ok 2025-02-10 10:09:21.757152 | 2025-02-10 10:09:21.757271 | LOOP [stage-output : Ensure target folders exist] 2025-02-10 10:09:22.288657 | orchestrator | ok: "docs" 2025-02-10 10:09:22.289258 | 2025-02-10 10:09:22.548716 | orchestrator | ok: "artifacts" 2025-02-10 10:09:22.815295 | orchestrator | ok: "logs" 2025-02-10 10:09:22.840767 | 2025-02-10 10:09:22.840936 | LOOP [stage-output : Copy files and folders to staging folder] 2025-02-10 10:09:22.881984 | 2025-02-10 10:09:22.882211 | TASK [stage-output : Make all log files readable] 2025-02-10 10:09:23.168082 | orchestrator | ok 2025-02-10 10:09:23.177739 | 2025-02-10 10:09:23.177869 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-02-10 10:09:23.223239 | orchestrator | skipping: Conditional result was False 2025-02-10 10:09:23.239924 | 2025-02-10 10:09:23.240076 | TASK [stage-output : Discover log files for compression] 2025-02-10 10:09:23.265739 | orchestrator | skipping: Conditional result was False 2025-02-10 10:09:23.285427 | 2025-02-10 10:09:23.285567 | LOOP [stage-output : Archive everything from logs] 2025-02-10 10:09:23.362817 | 2025-02-10 10:09:23.362986 | PLAY [Post cleanup play] 2025-02-10 10:09:23.386726 | 2025-02-10 10:09:23.386843 | TASK [Set cloud fact (Zuul deployment)] 2025-02-10 10:09:23.451363 | orchestrator | ok 2025-02-10 10:09:23.461597 | 2025-02-10 10:09:23.461744 | TASK [Set cloud fact (local deployment)] 2025-02-10 10:09:23.496018 | orchestrator | skipping: Conditional result was False 2025-02-10 10:09:23.510835 | 2025-02-10 10:09:23.510960 | TASK [Clean the cloud environment] 2025-02-10 10:09:24.141471 | orchestrator | 2025-02-10 10:09:24 - clean up servers 2025-02-10 10:09:24.965818 | orchestrator | 2025-02-10 10:09:24 - testbed-manager 2025-02-10 10:09:25.055259 | orchestrator | 2025-02-10 10:09:25 - testbed-node-0 2025-02-10 10:09:25.151670 | orchestrator | 2025-02-10 10:09:25 - testbed-node-1 2025-02-10 10:09:25.249041 | orchestrator | 2025-02-10 10:09:25 - testbed-node-3 2025-02-10 10:09:25.353773 | orchestrator | 2025-02-10 10:09:25 - testbed-node-4 2025-02-10 10:09:25.451094 | orchestrator | 2025-02-10 10:09:25 - testbed-node-2 2025-02-10 10:09:25.546482 | orchestrator | 2025-02-10 10:09:25 - testbed-node-5 2025-02-10 10:09:25.642784 | orchestrator | 2025-02-10 10:09:25 - clean up keypairs 2025-02-10 10:09:25.666002 | orchestrator | 2025-02-10 10:09:25 - testbed 2025-02-10 10:09:25.697589 | orchestrator | 2025-02-10 10:09:25 - wait for servers to be gone 2025-02-10 10:09:34.600903 | orchestrator | 2025-02-10 10:09:34 - clean up ports 2025-02-10 10:09:34.836201 | orchestrator | 2025-02-10 10:09:34 - 0ef370d4-f39f-4233-94fa-2f42b2770a97 2025-02-10 10:09:35.078539 | orchestrator | 2025-02-10 10:09:35 - a3bd5dfb-0b49-4703-b4bd-ba20ebbd2f2f 2025-02-10 10:09:35.316909 | orchestrator | 2025-02-10 10:09:35 - a48acd54-5d6a-4f33-8661-1a19ba302342 2025-02-10 10:09:35.504416 | orchestrator | 2025-02-10 10:09:35 - c39d7066-cd00-417d-944b-b656e8cf9331 2025-02-10 10:09:35.742892 | orchestrator | 2025-02-10 10:09:35 - ce38631a-bbdc-4d5c-bd19-0fe3676c8ddc 2025-02-10 10:09:36.028856 | orchestrator | 2025-02-10 
10:09:36 - d05fb940-0e4c-46d3-bd9c-29d8b1399bfb 2025-02-10 10:09:36.209970 | orchestrator | 2025-02-10 10:09:36 - e843c608-a0ea-47b5-a0f5-4a3aeda9e73f 2025-02-10 10:09:36.536884 | orchestrator | 2025-02-10 10:09:36 - clean up volumes 2025-02-10 10:09:36.687178 | orchestrator | 2025-02-10 10:09:36 - testbed-volume-0-node-base 2025-02-10 10:09:36.732008 | orchestrator | 2025-02-10 10:09:36 - testbed-volume-2-node-base 2025-02-10 10:09:36.771953 | orchestrator | 2025-02-10 10:09:36 - testbed-volume-4-node-base 2025-02-10 10:09:36.814588 | orchestrator | 2025-02-10 10:09:36 - testbed-volume-5-node-base 2025-02-10 10:09:36.856108 | orchestrator | 2025-02-10 10:09:36 - testbed-volume-1-node-base 2025-02-10 10:09:36.896078 | orchestrator | 2025-02-10 10:09:36 - testbed-volume-3-node-base 2025-02-10 10:09:36.935751 | orchestrator | 2025-02-10 10:09:36 - testbed-volume-11-node-5 2025-02-10 10:09:36.976460 | orchestrator | 2025-02-10 10:09:36 - testbed-volume-17-node-5 2025-02-10 10:09:37.015305 | orchestrator | 2025-02-10 10:09:37 - testbed-volume-13-node-1 2025-02-10 10:09:37.055193 | orchestrator | 2025-02-10 10:09:37 - testbed-volume-14-node-2 2025-02-10 10:09:37.103307 | orchestrator | 2025-02-10 10:09:37 - testbed-volume-3-node-3 2025-02-10 10:09:37.148230 | orchestrator | 2025-02-10 10:09:37 - testbed-volume-manager-base 2025-02-10 10:09:37.190091 | orchestrator | 2025-02-10 10:09:37 - testbed-volume-15-node-3 2025-02-10 10:09:37.231935 | orchestrator | 2025-02-10 10:09:37 - testbed-volume-4-node-4 2025-02-10 10:09:37.275199 | orchestrator | 2025-02-10 10:09:37 - testbed-volume-7-node-1 2025-02-10 10:09:37.315601 | orchestrator | 2025-02-10 10:09:37 - testbed-volume-5-node-5 2025-02-10 10:09:37.356572 | orchestrator | 2025-02-10 10:09:37 - testbed-volume-12-node-0 2025-02-10 10:09:37.403742 | orchestrator | 2025-02-10 10:09:37 - testbed-volume-6-node-0 2025-02-10 10:09:37.449377 | orchestrator | 2025-02-10 10:09:37 - testbed-volume-1-node-1 2025-02-10 10:09:37.499615 | orchestrator | 2025-02-10 10:09:37 - testbed-volume-2-node-2 2025-02-10 10:09:37.542328 | orchestrator | 2025-02-10 10:09:37 - testbed-volume-9-node-3 2025-02-10 10:09:37.587698 | orchestrator | 2025-02-10 10:09:37 - testbed-volume-10-node-4 2025-02-10 10:09:37.634482 | orchestrator | 2025-02-10 10:09:37 - testbed-volume-16-node-4 2025-02-10 10:09:37.681193 | orchestrator | 2025-02-10 10:09:37 - testbed-volume-0-node-0 2025-02-10 10:09:37.726310 | orchestrator | 2025-02-10 10:09:37 - testbed-volume-8-node-2 2025-02-10 10:09:37.770812 | orchestrator | 2025-02-10 10:09:37 - disconnect routers 2025-02-10 10:09:37.870167 | orchestrator | 2025-02-10 10:09:37 - testbed 2025-02-10 10:09:38.566504 | orchestrator | 2025-02-10 10:09:38 - clean up subnets 2025-02-10 10:09:38.604702 | orchestrator | 2025-02-10 10:09:38 - subnet-testbed-management 2025-02-10 10:09:38.724466 | orchestrator | 2025-02-10 10:09:38 - clean up networks 2025-02-10 10:09:38.884534 | orchestrator | 2025-02-10 10:09:38 - net-testbed-management 2025-02-10 10:09:39.144668 | orchestrator | 2025-02-10 10:09:39 - clean up security groups 2025-02-10 10:09:39.176865 | orchestrator | 2025-02-10 10:09:39 - testbed-node 2025-02-10 10:09:39.257818 | orchestrator | 2025-02-10 10:09:39 - testbed-management 2025-02-10 10:09:39.347782 | orchestrator | 2025-02-10 10:09:39 - clean up floating ips 2025-02-10 10:09:39.381762 | orchestrator | 2025-02-10 10:09:39 - 81.163.193.97 2025-02-10 10:09:39.758711 | orchestrator | 2025-02-10 10:09:39 - clean up routers 2025-02-10 10:09:39.809663 
| orchestrator | 2025-02-10 10:09:39 - testbed 2025-02-10 10:09:40.683890 | orchestrator | changed 2025-02-10 10:09:40.721898 | 2025-02-10 10:09:40.722009 | PLAY RECAP 2025-02-10 10:09:40.722063 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-02-10 10:09:40.722088 | 2025-02-10 10:09:40.848699 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-02-10 10:09:40.851754 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-02-10 10:09:41.571803 | 2025-02-10 10:09:41.572009 | PLAY [Base post-fetch] 2025-02-10 10:09:41.607243 | 2025-02-10 10:09:41.607442 | TASK [fetch-output : Set log path for multiple nodes] 2025-02-10 10:09:41.675487 | orchestrator | skipping: Conditional result was False 2025-02-10 10:09:41.687573 | 2025-02-10 10:09:41.687824 | TASK [fetch-output : Set log path for single node] 2025-02-10 10:09:41.751062 | orchestrator | ok 2025-02-10 10:09:41.761238 | 2025-02-10 10:09:41.761378 | LOOP [fetch-output : Ensure local output dirs] 2025-02-10 10:09:42.242380 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/eea07dfd5b714acba1304c52e3867367/work/logs" 2025-02-10 10:09:42.519074 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/eea07dfd5b714acba1304c52e3867367/work/artifacts" 2025-02-10 10:09:42.786251 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/eea07dfd5b714acba1304c52e3867367/work/docs" 2025-02-10 10:09:42.811802 | 2025-02-10 10:09:42.811981 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-02-10 10:09:43.608326 | orchestrator | changed: .d..t...... ./ 2025-02-10 10:09:43.608836 | orchestrator | changed: All items complete 2025-02-10 10:09:43.608899 | 2025-02-10 10:09:44.206992 | orchestrator | changed: .d..t...... ./ 2025-02-10 10:09:44.810885 | orchestrator | changed: .d..t...... 
./ 2025-02-10 10:09:44.838391 | 2025-02-10 10:09:44.838537 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-02-10 10:09:44.880103 | orchestrator | skipping: Conditional result was False 2025-02-10 10:09:44.887011 | orchestrator | skipping: Conditional result was False 2025-02-10 10:09:44.943334 | 2025-02-10 10:09:44.943440 | PLAY RECAP 2025-02-10 10:09:44.943491 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-02-10 10:09:44.943518 | 2025-02-10 10:09:45.064226 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-02-10 10:09:45.067276 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-02-10 10:09:45.771678 | 2025-02-10 10:09:45.771829 | PLAY [Base post] 2025-02-10 10:09:45.800165 | 2025-02-10 10:09:45.800292 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-02-10 10:09:46.668762 | orchestrator | changed 2025-02-10 10:09:46.707047 | 2025-02-10 10:09:46.707177 | PLAY RECAP 2025-02-10 10:09:46.707244 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-02-10 10:09:46.707306 | 2025-02-10 10:09:46.824104 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-02-10 10:09:46.827253 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-02-10 10:09:47.573288 | 2025-02-10 10:09:47.573459 | PLAY [Base post-logs] 2025-02-10 10:09:47.589750 | 2025-02-10 10:09:47.589888 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-02-10 10:09:48.052249 | localhost | changed 2025-02-10 10:09:48.058626 | 2025-02-10 10:09:48.058882 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-02-10 10:09:48.102356 | localhost | ok 2025-02-10 10:09:48.112848 | 2025-02-10 10:09:48.112997 | TASK [Set zuul-log-path fact] 2025-02-10 10:09:48.134015 | localhost | ok 2025-02-10 10:09:48.150246 | 2025-02-10 10:09:48.150356 | TASK [set-zuul-log-path-fact : Set log path for a change] 2025-02-10 10:09:48.186693 | localhost | skipping: Conditional result was False 2025-02-10 10:09:48.192270 | 2025-02-10 10:09:48.192456 | TASK [set-zuul-log-path-fact : Set log path for a ref update] 2025-02-10 10:09:48.232298 | localhost | ok 2025-02-10 10:09:48.237201 | 2025-02-10 10:09:48.237349 | TASK [set-zuul-log-path-fact : Set log path for a periodic job] 2025-02-10 10:09:48.284591 | localhost | skipping: Conditional result was False 2025-02-10 10:09:48.293573 | 2025-02-10 10:09:48.293846 | TASK [set-zuul-log-path-fact : Set log path for a change] 2025-02-10 10:09:48.322254 | localhost | skipping: Conditional result was False 2025-02-10 10:09:48.330749 | 2025-02-10 10:09:48.330978 | TASK [set-zuul-log-path-fact : Set log path for a ref update] 2025-02-10 10:09:48.358384 | localhost | skipping: Conditional result was False 2025-02-10 10:09:48.364784 | 2025-02-10 10:09:48.364960 | TASK [set-zuul-log-path-fact : Set log path for a periodic job] 2025-02-10 10:09:48.391144 | localhost | skipping: Conditional result was False 2025-02-10 10:09:48.405031 | 2025-02-10 10:09:48.405218 | TASK [upload-logs : Create log directories] 2025-02-10 10:09:48.926077 | localhost | changed 2025-02-10 10:09:48.930469 | 2025-02-10 10:09:48.930577 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-02-10 10:09:49.459696 | localhost -> localhost | ok: Runtime: 0:00:00.007486 2025-02-10 
10:09:49.466047 | 2025-02-10 10:09:49.466193 | TASK [upload-logs : Upload logs to log server] 2025-02-10 10:09:50.038072 | localhost | Output suppressed because no_log was given 2025-02-10 10:09:50.043522 | 2025-02-10 10:09:50.043713 | LOOP [upload-logs : Compress console log and json output] 2025-02-10 10:09:50.125974 | localhost | skipping: Conditional result was False 2025-02-10 10:09:50.145504 | localhost | skipping: Conditional result was False 2025-02-10 10:09:50.160681 | 2025-02-10 10:09:50.160877 | LOOP [upload-logs : Upload compressed console log and json output] 2025-02-10 10:09:50.237070 | localhost | skipping: Conditional result was False 2025-02-10 10:09:50.237798 | 2025-02-10 10:09:50.249258 | localhost | skipping: Conditional result was False 2025-02-10 10:09:50.259322 | 2025-02-10 10:09:50.259484 | LOOP [upload-logs : Upload console log and json output]
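The live-migration exercise earlier in this log (draining testbed-node-4, -5, -3 and -4 in turn, re-listing instance placement between rounds) boils down to the following condensed sketch; the compute_list body is an assumption, while the osism migrate commands are copied verbatim from the trace above:

  # helper assumed: list instances per compute node before/after each drain
  compute_list() {
      for node in testbed-node-3 testbed-node-4 testbed-node-5; do
          osism manage compute list "$node"
      done
  }

  compute_list
  osism manage compute migrate --yes --target testbed-node-3 testbed-node-4   # drain node-4 onto node-3
  compute_list
  osism manage compute migrate --yes --target testbed-node-3 testbed-node-5   # drain node-5 onto node-3
  compute_list
  osism manage compute migrate --yes --target testbed-node-4 testbed-node-3   # drain node-3 onto node-4
  compute_list
  osism manage compute migrate --yes --target testbed-node-5 testbed-node-4   # drain node-4 onto node-5
  compute_list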